ADF cannot ingest D365 FnO data from Azure Synapse Link


Hi Microsoft Support Team,

We are currently using the Export to Data Lake feature to ingest Dynamics 365 Finance & Operations (FnO) data into Azure Data Lake Storage Gen2 (ADLS Gen2) and subsequently copy the data into Azure SQL Database using Azure Data Factory (ADF).

As Export to Data Lake is being deprecated from 1 November 2024, we have started migrating to Azure Synapse Link for Dataverse.

We have successfully configured Azure Synapse Link for Dataverse for D365 Finance & Operations in our UAT environment, as shown in the screenshot below (Screenshot 1). The data is being replicated to ADLS Gen2 as expected.

[Screenshot 1: Azure Synapse Link for Dataverse configuration in the UAT environment]

However, we are facing challenges ingesting this data into Azure SQL Database using ADF or Synapse pipelines, primarily due to the new partitioned folder structure used by Azure Synapse Link, as shown in the screenshot below (Screenshot 2).

[Screenshot 2: partitioned Delta/Parquet folder structure written by Azure Synapse Link in ADLS Gen2]

Observations & Issues

  1. Folder Structure Difference

    • With Export to Data Lake, each entity was written to a predictable, non-partitioned folder structure, making it straightforward to define absolute paths in ADF.

    • With Azure Synapse Link, FnO tables are written in a partitioned Delta/Parquet structure (e.g. PartitionId=2021, PartitionId=2022, etc.), as shown in Screenshot 2.

    • There is no single absolute path for an entity’s data files.

  2. ADF / Synapse Data Flow Issue

    • We created ADF/Synapse pipelines (screenshots 3 and 4) using:

      • ADLS Gen2 as source

      • Common Data Model / Delta / Parquet formats

      • Recursive folder and wildcard options

    • However:

      • Data Preview returns no data

      • Pipelines fail or do not load records into Azure SQL Database

      • Errors occur related to missing partition metadata or empty datasets

  3. Synapse SQL vs ADF Gap

    • In Synapse Analytics workspace, we can:

      • See FnO tables

      • Query the data successfully using serverless SQL

    • But it is unclear how to operationalize this into an ADF/Synapse pipeline activity that reliably loads the data into Azure SQL Database (see the sketch after this list for the kind of query that works interactively).
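
For context, the kind of ad-hoc query that works for us against the serverless SQL endpoint looks roughly like the sketch below. The storage account, container, and table names are placeholders for illustration only, not our actual environment, and the deltalake/CustTable path is an assumption about where the converted table lands:

-- Placeholder names throughout; run against the workspace's serverless SQL endpoint.
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://<storageaccount>.dfs.core.windows.net/<container>/deltalake/CustTable/',
    FORMAT = 'DELTA'
) AS custtable;

Serverless SQL resolves the PartitionId=... subfolders from the Delta transaction log rather than from the folder paths, which appears to be exactly what our wildcard-based ADF datasets are missing.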

 

 

Business Impact

We currently have multiple production pipelines that:

  • Copy data from ADLS Gen2 (Export to Data Lake)

  • Load transformed data into Azure SQL Database

Migrating from Export to Data Lake → Azure Synapse Link requires:

  • Refactoring all existing pipelines

  • Redefining ingestion logic for partitioned Delta/Parquet data

  • Clear Microsoft-recommended patterns for ADF-based ingestion

At present, there is no clear or consistent guidance on how to replace Export to Data Lake ingestion pipelines with Azure Synapse Link–based pipelines for FnO data.

 

 

Assistance Requested

Could you please provide:

  1. Microsoft-recommended pattern to ingest FnO data from Azure Synapse Link (ADLS Gen2) into Azure SQL Database using:

    • Azure Data Factory and/or

    • Synapse pipelines

  2. Clarification on:

    • How ADF should handle partitioned Delta/Parquet FnO tables

    • Whether serverless SQL external tables/views are the intended ingestion layer before copying to Azure SQL

    • Any reference architectures or samples for this migration scenario

  3. Confirmation whether:

    • The current Microsoft Learn documentation for ADF ingestion applies to FnO Synapse Link outputs, or

    • A different approach is expected after the Export to Data Lake deprecation

 

 

We appreciate your guidance, as this migration is critical for maintaining continuity when Export to Data Lake is retired.

Thank you for your support.

Kind regards,
Raheel Islam

Reply from CU09010826-0:

Hello,

The new Delta format relies on a transaction log rather than just static files, so your old "wildcard" folder paths are failing because they ignore the metadata that tells ADF which files are actually valid. The most reliable, human-friendly fix is to stop pointing ADF directly at the storage folders and instead point it at the Synapse serverless SQL views that are created automatically when you set up the link. These views act like a standard SQL table, flattening all those partition folders into a single readable object, which you can then copy into your Azure SQL Database using a standard Copy Activity without having to worry about the underlying Delta/Parquet logic.
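
As a rough illustration of that pattern, with the storage path, schema, and view names below used as placeholders rather than anything from your environment, the view on the serverless endpoint could look something like this (skip this step if the link has already created a lake database with queryable views for each FnO table):

-- Run once in a user database on the Synapse serverless SQL endpoint; placeholder names throughout.
CREATE VIEW dbo.CustTable_v AS
SELECT *
FROM OPENROWSET(
    BULK 'https://<storageaccount>.dfs.core.windows.net/<container>/deltalake/CustTable/',
    FORMAT = 'DELTA'
) AS custtable;

In ADF, a linked service pointing at the serverless SQL endpoint (typically <workspace>-ondemand.sql.azuresynapse.net) then lets a standard Copy Activity use a source query such as SELECT * FROM dbo.CustTable_v, with Azure SQL Database as the sink. The serverless layer handles the Delta and partition resolution, so the pipeline never touches the folder structure.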

