
Export Data from D365 FO using Synapse Link

Microsoft introduced Azure Synapse Link for Dataverse to export data from Dynamics 365 finance and operations (D365FO) apps into Azure Synapse Analytics or Azure Data Lake Storage Gen2. The current version supports both D365FO entities and raw tables. Earlier technologies such as Bring Your Own Database (BYOD) and ‘Export to Data Lake’ need to be transitioned to ‘Synapse Link for Dataverse’. The aim is to unify how data is exported from the Dynamics 365 platform, regardless of whether it is D365CE or D365FO. The new feature provides scalability, high availability, and disaster recovery capabilities.

The Azure Synapse Link for Dataverse allows you to choose standard and custom finance and operations entities and tables. It supports continuous entity and table data replication, including create, update, and delete (CUD) transactions. It provides an option to link or unlink the D365FO environment to Azure Synapse Analytics and/or Data Lake Storage Gen2 in your Azure subscription without the need for external tools or configuration. Data is stored in the Common Data Model format, which provides semantic consistency across apps and deployments.

Figure: Exporting Dynamics 365 data using Azure Synapse Link with Azure Data Lake, Synapse, and Spark

Azure Synapse Link for Dataverse offers the following features that you can use with finance and operations data:

  1. Both standard and custom entities and tables can be exported. Older techniques supported only one or the other: BYOD supported only entities, and ‘Export to Data Lake’ supported only raw tables. This feature supports both, but entity support is currently limited to a subset of entities, and that list grows with each D365FO release.
  2. Continuous data replication: create, update, and delete (CUD) transactions are continuously pushed to Azure Data Lake. Unlike BYOD, this doesn’t impact D365FO performance.
  3. The D365FO environment can be linked to, or unlinked from, multiple Azure Synapse workspaces and/or Data Lake Storage Gen2 accounts in your Azure subscription. The Data Lake and the workspace must be in the same region as the D365FO environment.
  4. Data can be exported in both CSV and Delta Parquet format. Delta Parquet is recommended for its better read performance and its support for ACID transactions and schema evolution/drift. ACID transactions keep data consistent even under concurrent read and write operations.
  5. The table-count limit of the Export to Data Lake feature doesn’t apply to Azure Synapse Link for Dataverse.

There are three options for exporting D365FO tables with Synapse Link.

Option 1: Azure Data Lake in CSV format (BYOL CSV)

This option exports D365FO data to Azure Data Lake in CSV format within your Azure subscription, mirroring the familiar ‘Export to Data Lake’ feature in D365FO. The data resides in Azure Data Lake, and downstream applications build their own pipelines to consume it. Organizations with dedicated data and analytics capabilities may find this option ideal. Costs are primarily for Data Lake storage, with additional expenses for downstream data reads, copies, and transformations. Integration with Synapse and an Apache Spark pool is optional for this option.

Figure: Export of Dynamics 365 data to Azure Data Lake in CSV format (BYOL CSV) using Azure Synapse Link
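
As a rough illustration of a downstream pipeline, the following PySpark sketch reads one exported table from the lake. The storage account, container, folder, and column names are hypothetical; in practice the schema comes from the metadata Synapse Link writes alongside the CSV files.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

spark = SparkSession.builder.appName("read-synapse-link-csv").getOrCreate()

# Hypothetical ADLS Gen2 path to one table exported by Synapse Link.
path = "abfss://dataverse@mydatalake.dfs.core.windows.net/custtable/*.csv"

# Illustrative schema; derive the real one from the export metadata.
schema = StructType([
    StructField("ACCOUNTNUM", StringType()),
    StructField("CUSTGROUP", StringType()),
    StructField("CREDITMAX", DecimalType(32, 6)),
])

# header=False assumes headerless files, as with the older Export to
# Data Lake feature; verify this against your own export.
df = spark.read.csv(path, schema=schema, header=False)
df.show(5)
```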

Option 2: Azure Data Lake in Delta Parquet format (BYOL Delta)

Choose this option to export D365FO data to Azure Data Lake in the Delta Parquet format. This format improves storage efficiency and query performance, and supports ACID transactions and schema evolution/drift. To enable the CSV-to-Delta conversion, you must provision a Synapse workspace and an Apache Spark pool within your Azure subscription. As with Option 1, downstream applications retain the flexibility to build pipelines that suit their specific needs. Costs encompass Data Lake storage, the Azure Synapse workspace, and the Apache Spark pool. Note that frequent Delta conversions with a high volume of data changes can lead to substantial Spark pool costs, and downstream data consumption and conversion expenses should also be considered. This option offers better performance and adaptability but requires careful monitoring to manage costs effectively.

Figure: Export of Dynamics 365 data to Azure Data Lake in Delta Parquet format using Azure Synapse Link
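
To show why the Delta format simplifies downstream consumption, here is a minimal PySpark sketch that reads the converted table directly; the path is again hypothetical. Because Delta supports ACID, the read sees a consistent snapshot even while the conversion job is writing, and earlier versions stay queryable.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-synapse-link-delta").getOrCreate()

# Hypothetical ADLS Gen2 path to a table converted to Delta by the Spark pool.
path = "abfss://dataverse@mydatalake.dfs.core.windows.net/deltalake/custtable"

# Snapshot read: concurrent conversion writes don't affect it.
df = spark.read.format("delta").load(path)
df.select("ACCOUNTNUM", "CUSTGROUP").show(5)

# Delta time travel: inspect the table as of an earlier version.
df_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(df_v0.count())
```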

Option 3: Microsoft OneLake with Fabric

For a streamlined solution, export D365FO data directly to OneLake. This approach minimizes the need for downstream applications to manage data conversion or copying. OneLake acts as a centralized repository, simplifying data access and utilization, and is well suited to organizations seeking a seamless, integrated export process. Costs are associated with OneLake storage (part of the Dataverse storage cost); there are no Azure Synapse workspace or Apache Spark pool expenses. Note, however, that OneLake support is limited to D365FO and CE entities.

Figure: Export of Dynamics 365 data to OneLake in Delta Parquet format using Azure Synapse Link
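
As a hedged sketch, reading the exported table from a Microsoft Fabric notebook could look like the following; the workspace and lakehouse names are hypothetical, and the exact OneLake path layout should be verified against your environment.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-onelake-delta").getOrCreate()

# Hypothetical OneLake path; OneLake exposes an ADLS Gen2-compatible endpoint.
path = ("abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
        "MyLakehouse.Lakehouse/Tables/custtable")

df = spark.read.format("delta").load(path)
df.show(5)
```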
The following table compares the three options.

| Criteria | (Option 1) Synapse Link BYOL (CSV) | (Option 2) Synapse Link BYOL (Delta) | (Option 3) Microsoft OneLake with Fabric |
| --- | --- | --- | --- |
| Ease of administration | Customer-managed PaaS resources: storage account | Customer-managed PaaS resources: storage account, Synapse workspace, and Spark pool | Managed by Microsoft |
| Security | Firewall support on the storage account | Firewall support on the storage account and Synapse workspace | Managed by Microsoft |
| Query performance | Data format: CSV. No read-write contention when reading a completed incremental update folder | Data format: Delta. No read-write contention as Delta supports ACID; better read performance | Data format: Delta. No read-write contention as Delta supports ACID; better read performance |
| End-to-end data freshness | Configurable: 15 minutes – 24 hours | Configurable: 15 minutes – 24 hours | ~1 hour |
| Data write and storage costs | Storage + transaction cost | Storage + transaction + Delta conversion (Spark pool) cost | Dataverse capacity (entitlement + add-on) |
| Compute cost | Downstream data pipeline cost | Downstream data pipeline cost | Fabric capacity cost |
Prerequisites

The following prerequisites must be in place before you can configure Synapse Link:

  • A D365FO environment on version 10.0.34 (PU 58) or later
  • Microsoft Power Platform integration enabled for the D365FO environment
  • The Sql row version change tracking configuration key enabled
  • Access to an Azure subscription with the following resources provisioned:
    • An Azure Data Lake Storage Gen2 account in the same region as the D365FO environment
    • An Azure Synapse Analytics workspace in the same region as the D365FO environment
    • An Azure Synapse Spark pool with version 3.1 or later

The following permissions are required to configure Synapse Link.

More information can be found here:

https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-select-fno-data
| Required permission | Description |
| --- | --- |
| Dataverse system administrator security role for the Dataverse environment | A D365FO environment is connected to a Dataverse environment; to enable Synapse Link, the user must be an administrator of that environment. |
| Owner and Storage Blob Data Contributor roles on the Azure Data Lake Storage Gen2 account | Synapse Link exports data from Dataverse to blob storage using a microservice. The user configuring Synapse Link needs Owner access to manage the storage account's access control. |
| Synapse Administrator security role on the Synapse workspace | Synapse Link connects to an Azure Synapse workspace, so the user linking the two services must be a workspace administrator. This workspace also runs a Spark pool instance to convert files from CSV to Delta Parquet format. |
| User Access Administrator and Contributor permissions on the resource group | In the resource group where the Azure Data Lake and Synapse workspace reside, the user must be able to manage those resources, for example to create a Spark pool. |

The following FAQ explains how to connect Azure Synapse Link for Dataverse (which is connected to the D365FO environment) with your Azure Synapse workspace, among other common questions.

Azure Synapse Link for Dataverse FAQ

https://learn.microsoft.com/en-us/power-apps/maker/data-platform/export-data-lake-faq

Azure Synapse Link for D365FO offers a seamless data export solution for reporting, analysis, and integration. Its scalability ensures efficient handling of growing data volumes without impacting D365FO performance, and its processing capabilities enable near-real-time analysis and integration.

Driving Solution Delivery Excellence: Achieving Repeatable, High-Quality Delivery with Custom AI Agent Skills

The “Integration Factory” Model

Integration delivery often fails because it relies on “Hero Culture” and passive documentation that no one reads. To achieve true Solution Delivery Excellence, organizations must shift from bespoke craftsmanship to Industrialized Delivery. By converting tribal knowledge into Custom AI Agent Skills, you transform static wikis into active guardrails. These skills automate high-leverage decisions—like API standards, resilience patterns (circuit breakers/retries), and security audits—ensuring every project is “correct-by-construction.” The result? Faster onboarding, near-zero compliance violations, and a repeatable, high-quality delivery engine.

The Scene

It’s 9 AM on a Monday. A workshop room full of architects, developers, and business analysts is whiteboarding a new integration between a core ERP and a third-party logistics API. Someone asks: “How do we handle retries when the external API goes down?”

In most organizations, what follows is a 45-minute debate. Someone recalls a pattern from a project two years ago. Another pulls up a Confluence page that hasn’t been updated since 2022. A third suggests “doing what we did last time”—but nobody can remember exactly what “last time” looked like.

Now, imagine a different Monday. The architect invokes an AI Agent equipped with the organization’s Custom Integration Skill. Within seconds, the Agent reasons through the requirement and generates a compliant design scaffold. It doesn’t just suggest a “retry”; it injects the company’s standard Exponential Backoff policy, configures a Circuit Breaker to prevent cascading failures, and ensures Idempotency keys are handled per the global standard.

The 45-minute debate becomes a 5-minute review. That is the shift from “Hero Engineering” to Industrialized Delivery.


The Real Problem: Knowledge Exists, but It Doesn’t Travel

Most organizations don’t have a knowledge problem; they have a knowledge distribution problem. Standards live in SharePoint docs that nobody opens. Best practices for security, logging, and error handling are “solved” by senior engineers who then move to the next project, taking that wisdom with them.

The result is a “Decision Tax” paid on every project:

  • Reinventing the Plumbing: Teams spend sprints debating headers and error codes instead of business logic.
  • Inconsistent Quality: One team implements robust PII masking; another silently logs sensitive data.
  • Passive Documentation: A wiki page can’t intervene when a developer is about to make a mistake.

What if your best practices were not documents to be read, but skills to be executed?


Custom AI Agent Skills: Best Practices That Execute

Modern AI Agents support Skills—structured packages of instructions, templates, and reference material that an agent loads on demand. Unlike a static PDF, a Skill is active. It shows up at the point of work, ensuring that the “Golden Path” is the path of least resistance.

By encoding your architectural DNA into these skills, you transform your best practices into executable guardrails.
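
As an illustrative sketch only (no particular agent framework is assumed, and every name here is hypothetical), a skill might be packaged as structured data the agent loads on demand:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSkill:
    """A packaged, executable best practice an agent loads on demand."""
    name: str
    trigger: str       # when the agent should load this skill
    instructions: str  # the distilled standard, phrased as agent guidance
    templates: dict[str, str] = field(default_factory=dict)  # reusable scaffolds

retry_skill = AgentSkill(
    name="resilience-outbound-calls",
    trigger="designing or reviewing any call to an external API",
    instructions=(
        "Wrap external calls in the standard exponential-backoff retry "
        "policy and a circuit breaker; require idempotency keys on POST."
    ),
    templates={"retry_policy": "max_attempts=4, base_delay=0.5s, jitter=on"},
)
```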


Where Skills Accelerate the Integration Lifecycle

1. Requirements & Design Workshops

The quality of a workshop shouldn’t depend on which architect is in the room. A Workshop Skill provides the Agent with a structured interview framework.

  • The Guardrail: The Agent ensures non-functional requirements—SLA targets, data residency, and failover expectations—are captured in real-time, generating a standardized requirement document before the meeting ends.
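
For instance (a hypothetical record layout, not a prescribed format), the skill could have the agent fill a standard non-functional-requirements record as the conversation progresses:

```python
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirements:
    """Standard NFR record a workshop skill fills in during the session."""
    sla_availability: str      # e.g. "99.9% monthly"
    data_residency: str        # e.g. "EU only"
    failover_expectation: str  # e.g. "active-passive, RTO 15 minutes"

nfr = NonFunctionalRequirements(
    sla_availability="99.9% monthly",
    data_residency="EU only",
    failover_expectation="active-passive, RTO 15 minutes",
)
```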

2. The “Correct-by-Design” Skill

Design-time decisions have the highest leverage. A Design Skill encodes your API contract standards (REST/AsyncAPI) and naming conventions.

  • Deep Tech: The Agent automatically enforces Idempotency rules for POST requests and mandates Versioning strategies (e.g., /v2/) so you never break a downstream consumer again.
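
A minimal sketch of such a design-time check, assuming the contract is available as a parsed OpenAPI-style dictionary (the two rules and the spec layout are illustrative, not a real organization's standard):

```python
def check_design_rules(spec: dict) -> list[str]:
    """Flag violations of two illustrative design-time standards:
    versioned paths and idempotency keys on POST operations."""
    violations = []
    for path, operations in spec.get("paths", {}).items():
        # Rule 1: every path must carry an explicit major version segment.
        if not any(seg.startswith("v") and seg[1:].isdigit()
                   for seg in path.strip("/").split("/")):
            violations.append(f"{path}: missing version segment (e.g. /v2/)")
        # Rule 2: POST operations must declare an Idempotency-Key header.
        post = operations.get("post")
        if post is not None:
            params = post.get("parameters", [])
            if not any(p.get("name", "").lower() == "idempotency-key"
                       for p in params):
                violations.append(f"POST {path}: missing Idempotency-Key header")
    return violations

# Example: one compliant path, one that breaks both rules.
spec = {"paths": {
    "/v2/orders": {"post": {"parameters": [{"name": "Idempotency-Key",
                                            "in": "header"}]}},
    "/shipments": {"post": {"parameters": []}},
}}
for v in check_design_rules(spec):
    print(v)
```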

3. The Resilience & Error Handling Skill

This is where integrations usually fail. A Resilience Skill ensures that “plumbing” is never written from scratch.

  • Deep Tech: The Agent identifies external calls and automatically wraps them in Circuit Breaker logic and Dead-letter queue (DLQ) handling. It ensures every error response follows your central model, making cross-system troubleshooting predictable.
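
To make that concrete, here is a minimal, self-contained sketch of the kind of wrapper such a skill might generate: exponential backoff with a simple circuit breaker. The thresholds and names are illustrative, not an organizational standard.

```python
import random
import time

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; retries after `cooldown`."""
    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise CircuitOpenError("circuit open; failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

def call_with_backoff(breaker, fn, *args, attempts: int = 4, base: float = 0.5):
    """Retry through the breaker with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return breaker.call(fn, *args)
        except CircuitOpenError:
            raise  # don't hammer a known-down dependency
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base * (2 ** attempt) + random.uniform(0, 0.1))
```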

4. The Security & Observability “Sentry”

Security and visibility shouldn’t be bolted on at the end.

  • Deep Tech: A Sentry Skill ensures every flow includes X-Correlation-ID headers for distributed tracing. It automatically injects PII masking logic into logging statements and ensures OAuth2/OpenID Connect flows meet internal InfoSec mandates.
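
A hedged sketch of the logging side, using only the Python standard library: a filter that stamps every record with a correlation ID and masks email-shaped PII before it reaches the log. The header name and masking rule mirror the ones mentioned above, but the details are illustrative.

```python
import logging
import re
import uuid

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class SentryFilter(logging.Filter):
    """Injects a correlation ID and masks email-shaped PII in messages."""
    def __init__(self, correlation_id: str | None = None):
        super().__init__()
        # In a real flow this would come from the X-Correlation-ID header.
        self.correlation_id = correlation_id or str(uuid.uuid4())

    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = self.correlation_id
        record.msg = EMAIL.sub("***@***", str(record.msg))
        return True

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(correlation_id)s %(levelname)s %(message)s"))
logger = logging.getLogger("integration")
logger.addHandler(handler)
logger.addFilter(SentryFilter())
logger.setLevel(logging.INFO)

logger.info("Created customer jane.doe@example.com")  # email is masked
```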

From Tribal Knowledge to Organizational Capability

This approach shifts the definition of quality from the individual to the system.

| Feature | Traditional Approach | Agentic Skills Approach |
| --- | --- | --- |
| Format | Passive Wikis & PDFs | Executable instructions + Blueprints |
| Discovery | Engineer must go find it | Agent loads it at the point of need |
| Enforcement | Relies on manual Code Review | Compliance-by-Construction |
| Feedback Loop | Lessons stay within the team | Skill is updated; everyone benefits |

The Cultural Shift: Manufacturing Reliability

In this model, the role of the Senior Architect shifts. They no longer spend their days repeating the same advice in different meetings. Instead, they curate the Skills. They define what “Good” looks like once, and the AI Agent scales that expertise across fifty teams simultaneously.

This doesn’t eliminate engineering judgment; it frees it. No one should waste cognitive energy on decisions the organization made three years ago. That energy should go to the unsolved problems—the complex business logic and the novel edge cases.


Measurable Outcomes: The Integration Factory

| Metric | Before Skills | After Skills |
| --- | --- | --- |
| Workshop to Design Doc | 1–2 Weeks | Same Day |
| Time to first compliant code | 2–3 Sprints | Sprint 1 |
| Onboarding to Productivity | 3 Months | Days |
| Security/Standards Violations | 40% of PR comments | Near Zero |
| Production Traceability | Hit-or-Miss | 100% (By Default) |

Conclusion: Build the Machine that Builds the Work

The organizations that scale best are not those with the most “hero” individuals. They are those that have systematized what their heroes know.

By capturing your design standards, resilience patterns, and security blueprints into Custom AI Agent Skills, you move from fragmented execution to engineered delivery excellence. You aren’t just building integrations anymore; you’ve built an Integration Factory.