
Hi Microsoft Support Team,
We are currently using the Export to Data Lake feature to ingest Dynamics 365 Finance & Operations (FnO) data into Azure Data Lake Storage Gen2 (ADLS Gen2) and then copy it into Azure SQL Database using Azure Data Factory (ADF).
As Export to Data Lake is being deprecated on 1 November 2024, we have started migrating to Azure Synapse Link for Dataverse.
We have successfully configured Azure Synapse Link for Dataverse for D365 Finance & Operations in our UAT environment, as shown in Screenshot 1, and the data is being replicated to ADLS Gen2 as expected.
However, we are facing challenges ingesting this data into Azure SQL Database using ADF or Synapse pipelines, primarily because of the new partitioned folder structure used by Azure Synapse Link, as shown in Screenshot 2.
Folder Structure Difference
With Export to Data Lake, each entity was written to a predictable, non-partitioned folder structure, making it straightforward to define absolute paths in ADF.
With Azure Synapse Link, FnO tables are written in a partitioned Delta/Parquet structure (e.g. PartitionId=2021, PartitionId=2022, ...), as shown in Screenshot 2.
There is no single absolute path for an entity’s data files.
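For illustration, here is a minimal PySpark sketch of the partition-aware read we believe is now required (the storage account, container, and table paths are placeholders, not our actual values):

```python
# Minimal sketch: read a Synapse Link table whose data is spread across
# PartitionId=2021, PartitionId=2022, ... folders under one table root.
# All paths below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SynapseLinkRead").getOrCreate()

# Pointing at the table root lets Spark discover the PartitionId=* folders
# itself, so no single absolute file path is needed.
df = spark.read.format("delta").load(
    "abfss://<container>@<storageaccount>.dfs.core.windows.net/deltalake/<tablename>"
)
df.show(5)
```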
ADF / Synapse Data Flow Issue
We created ADF/Synapse pipelines (Screenshots 3 and 4) using:
ADLS Gen2 as source
Common Data Model / Delta / Parquet formats
Recursive folder and wildcard options
However:
Data Preview returns no data
Pipelines fail or do not load records into Azure SQL Database
Errors occur related to missing partition metadata or empty datasets (a diagnostic sketch that lists the actual folder contents follows this list)
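For reference, here is a minimal diagnostic sketch of how we can enumerate what the recursive/wildcard options should be matching (the account, container, and table names are placeholders):

```python
# Diagnostic sketch: list everything under a table folder to see the
# PartitionId=* subfolders and Parquet files a pipeline should match.
# Account, container, and table names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storageaccount>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client("<container>")

for path in filesystem.get_paths(path="<tablename>", recursive=True):
    print(path.name)  # expect PartitionId=.../part-*.parquet entries
```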
Synapse SQL vs ADF Gap
In Synapse Analytics workspace, we can:
See FnO tables
Query the data successfully using serverless SQL
However, it is unclear how to operationalize this into an ADF/Synapse pipeline activity that reliably loads the data into Azure SQL Database.
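For context, this is the kind of query that already succeeds against the serverless endpoint; a minimal pyodbc sketch (the endpoint, database, and table names are placeholders, not our actual values):

```python
# Minimal sketch of the serverless SQL query that works for us today.
# Endpoint, database, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>-ondemand.sql.azuresynapse.net;"
    "Database=<lake_database>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.<tablename>")
for row in cursor.fetchall():
    print(row)
```

Our open question is whether the intended pattern is to expose such serverless views/external tables as a source for ADF's Azure Synapse Analytics connector, rather than reading the lake files directly.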
We currently have multiple production pipelines that:
Copy data from ADLS Gen2 (Export to Data Lake)
Load transformed data into Azure SQL Database
Migrating from Export to Data Lake → Azure Synapse Link requires:
Refactoring all existing pipelines
Redefining ingestion logic for partitioned Delta/Parquet data (one candidate pattern we are evaluating is sketched after this list)
Clear Microsoft-recommended patterns for ADF-based ingestion
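One candidate pattern we are evaluating, pending your recommendation, is a Spark-based copy that reads the partitioned Delta table and writes it to Azure SQL over JDBC. A hypothetical sketch (all paths, server names, and credentials are placeholders):

```python
# Hypothetical end-to-end sketch: Delta source -> Azure SQL sink via JDBC.
# Paths, server names, and credentials are all placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("DeltaToAzureSql").getOrCreate()

# Read the table root; Spark resolves the PartitionId=* folders itself.
source = spark.read.format("delta").load(
    "abfss://<container>@<storageaccount>.dfs.core.windows.net/deltalake/<tablename>"
)

# Write to Azure SQL Database over JDBC.
(
    source.write.format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>")
    .option("dbtable", "dbo.<target_table>")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("overwrite")  # "append" would suit incremental loads
    .save()
)
```

We would prefer Microsoft's confirmation before refactoring the production pipelines around this approach.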
At present, we have not found clear, consistent guidance on replacing Export to Data Lake ingestion pipelines with Azure Synapse Link–based pipelines for FnO data.
Could you please provide:
A Microsoft-recommended pattern to ingest FnO data from Azure Synapse Link output (ADLS Gen2) into Azure SQL Database using:
Azure Data Factory and/or
Synapse pipelines
Clarification on:
How ADF should handle partitioned Delta/Parquet FnO tables
Whether serverless SQL external tables/views are the intended ingestion layer before copying to Azure SQL
Any reference architectures or samples for this migration scenario
Confirmation of whether:
The current Microsoft Learn documentation for ADF ingestion applies to FnO Synapse Link outputs, or
A different approach is expected after Export to Data Lake is deprecated
We appreciate your guidance, as this migration is critical for maintaining continuity when Export to Data Lake is retired.
Thank you for your support.
Kind regards,
Raheel Islam