In Data management, I configured an Azure SQL DB (AzSQLDB) data source. I created a custom data entity, successfully published it to AzSQLDB, and verified that the staging table was created in SQL.
Change tracking is enabled on all tables in the entity, since the requirement is to export delta changes to AzSQLDB at 10-minute intervals. This is on a Tier-2 (Standard Acceptance Test) environment.
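(By "verified" I mean nothing fancier than checking that the published entity's table shows up on the AzSQLDB side, roughly the kind of lookup sketched below with pyodbc; the server, database, credentials and table name are all placeholders.)

```python
# Sketch only: confirm the published entity's table exists in the AzSQLDB database.
# Server, database, credentials and the table name below are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;"
    "DATABASE=<byod-database>;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute(
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = ?",
        "<MyCustomEntityTable>",
    )
    print("Entity table present in AzSQLDB:", cursor.fetchone()[0] > 0)
```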
I then:
- Created an Export project in DMF
- Configured a Recurring data job from the Export project (including Application ID / authorization policy, etc.)
As expected, two batch jobs are created:
- Batch job for activity <XYZ>
- Batch job for monitoring activity – <XYZ>
Issue observed
The batch job executes multiple times and shows:
- Status: Ended
- No errors reported
However:
- The AzSQLDB staging table remains empty (see the row-count sketch after this list)
- No data is exported
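"Empty" is literal here: a plain row count against the table keeps coming back as zero even after several 10-minute windows (same kind of sketch as above, placeholders again):

```python
# Sketch only: the row count that stays at zero after repeated scheduled runs.
# Connection details and the table name are placeholders, as before.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;"
    "DATABASE=<byod-database>;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    count = conn.cursor().execute(
        "SELECT COUNT(*) FROM [<MyCustomEntityTable>]"
    ).fetchone()[0]
    print("Rows in the AzSQLDB table:", count)  # still 0 after each run
```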
When I manually run the same export using Export now on the Export project and review the execution log (View execution log), I see errors such as:
“The record already exists”
(This is expected in this environment due to existing test data.)
My questions
- Why does the recurring batch job not surface any errors, even though the export clearly fails when run manually?
- Shouldn’t the batch job history show Error instead of Ended if records were not exported successfully?
- Is there a difference in how errors are handled or logged between:
  - Manual export (Export now)
  - Recurring export batch execution?
- Is there a more reliable way to detect failed or partially failed recurring exports? The current batch job history gives a false sense that everything is working.
At the moment, the batch job history appears “green”, while in reality the export is failing silently.
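As a possible stop-gap I am thinking about polling each execution's status from outside, via the standard Data management package API, instead of trusting the batch history. A rough sketch of that idea follows; GetExecutionSummaryStatus is my reading of the package API docs rather than something I have running here, and the tenant, app registration, environment URL and execution ID are all placeholders.

```python
# Rough sketch: poll one DMF execution's summary status via the Data management
# package API. All IDs/URLs are placeholders; GetExecutionSummaryStatus is my
# assumption about the right action for this, based on the package API docs.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<client-secret>"
ENVIRONMENT_URL = "https://<environment>.operations.dynamics.com"
EXECUTION_ID = "<execution-id-from-job-history>"

# Standard Azure AD client-credentials token, scoped to the environment.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"{ENVIRONMENT_URL}/.default",
    },
    timeout=30,
).json()["access_token"]

# Ask DMF for the overall status of a single execution; per the docs this should
# come back as something like Succeeded / PartiallySucceeded / Failed / Executing.
response = requests.post(
    f"{ENVIRONMENT_URL}/data/DataManagementDefinitionGroups"
    ".Microsoft.Dynamics.DataEntities.GetExecutionSummaryStatus",
    headers={"Authorization": f"Bearer {token}"},
    json={"executionId": EXECUTION_ID},
    timeout=30,
)
status = response.json().get("value")
print("Execution status:", status)
if status != "Succeeded":
    print("Export needs attention")  # hook an alert in here
```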
Any insights into DMF’s recurring export error handling or logging behaviour would be appreciated.