A strange tale of entity staging table relations
Hello Community,
I have decided to make a conscious effort to post more frequently about weird X++, infrastructure, and integration problems I come across, as technical folks seem to be generally interested whenever I share my tales of strangeness in D365 "ERP" (<--- trying to future-proof my product references, as Microsoft will probably change the product names again before I complete this blog post). I'll be trying to make a post every other week, but it really depends on how frequently a weird problem lands on my desk.
Today I have something I'm guessing not many of you have seen, or ever had to care about, but hopefully sharing this knowledge proves useful to at least a few of you out there.
The setup: One of my colleagues cloned a base data entity (naughty naughty, I know - but those "in the know" are aware that base artifact cloning in general still happens surprisingly frequently). In the clone they added some new custom fields to the entity, as well as a new field in the entity key to increase the granularity of the key. As this particular entity was 'Data Management Enabled' and was intended to be used primarily through data management, they also added the new fields to the cloned staging table.
The problem: In import testing of the entity, we discovered that records which should have been considered 'new' when measured against the higher-granularity entity key were instead being treated as 'updates' as the system processed the imports. So where we would've expected an 'insert', we were getting an 'update', which failed because the record(s) didn't exist yet.
The cause: In this instance we were fortunate enough to have an error, so I could attach the debugger without knowing exactly where the error was being thrown from. Once the debugger paused on the error, I could walk the call stack to get a general sense of how we got from "importing data" to "failure to update". I homed in on the DMFEntityWriter class, and this being an upsert error, I was drawn to some interesting query code in its 'writeV2' method.
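I won't reproduce the system source here, but the shape of it is roughly the following. This is a simplified sketch from memory, with hypothetical table names, not the actual Microsoft code:

// Simplified sketch of the join setup in DMFEntityWriter.writeV2 -
// hypothetical table names, not the actual system source.
Query query = new Query();

// The staging table is the root datasource of the "backbone" query.
QueryBuildDataSource stagingDs = query.addDataSource(tableNum(MyCustomEntityStaging));

// The target entity is outer joined forupdate to the staging table,
// joining on whatever relations are defined on the staging table.
QueryBuildDataSource entityDs = stagingDs.addDataSource(tableNum(MyCustomEntity));
entityDs.joinMode(JoinMode::OuterJoin);
entityDs.update(true);       // forupdate
entityDs.relations(true);    // <-- join on the staging table's AOT relations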
The "backbone" query of this whole upsert process defines the staging table as its root datasource, then on lines 1802 through 1807 the system code outer joins forupdate to the entity itself, using the default relations on the staging table. When I popped the hood on the state of the query after line 1807 executed, I got a pretty good clue of what the problem was: the data entity was indeed being outer joined to the staging table, but it was only joining on the original entity key's fields (i.e. missing that additional granularity-increasing field). This caused the system to outer-join forupdate an inappropriately matching entity record, which led to downstream system code eventually trying to update an underlying record that didn't actually exist:
That failed update was the error we were seeing. Going back to the staging table to untangle the relation problem, I learned something I had never noticed before: most entity staging tables have a 'DataEntity' relation defined, and for what are hopefully now obvious reasons, that relation is critical to the system code's decision to update vs. insert a record when the staging table is involved in the data movement.
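If you want to see exactly which fields that relation joins on at runtime, a quick runnable class like this will dump it. This is a sketch with a hypothetical table name; DictRelation does the heavy lifting:

internal final class DumpStagingDataEntityRelation
{
    public static void main(Args _args)
    {
        // Load the 'DataEntity' relation defined on the staging table.
        DictRelation dictRelation = new DictRelation(tableNum(MyCustomEntityStaging));
        dictRelation.loadNameRelation('DataEntity');

        // Print each field pair the relation joins on:
        // <staging field> -> <entity field>
        for (int i = 1; i <= dictRelation.lines(); i++)
        {
            info(strFmt('%1 -> %2',
                fieldId2Name(tableNum(MyCustomEntityStaging), dictRelation.lineTableValue(i)),
                fieldId2Name(dictRelation.externTable(), dictRelation.lineExternTableValue(i))));
        }
    }
}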
The fix: Adding the new entity key field to the staging table's 'DataEntity' relation fixed the issue. Once the relation was more specific, the outer join I described above no longer found a match, so the system code saw nothing to update and treated the import as an insert - exactly what we would've expected for "new" data.
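Roughly speaking, and with hypothetical field names, the effective join the relation produces went from the first shape below to the second:

MyCustomEntityStaging staging;
MyCustomEntity        target;

// BEFORE the fix - the relation only covered the original key fields,
// so an unrelated entity record could "match":
select staging
    outer join forupdate target
        where target.FieldA == staging.FieldA
           && target.FieldB == staging.FieldB;

// AFTER adding the new key field to the 'DataEntity' relation - no false
// match, the outer join comes back empty, and the row is inserted:
select staging
    outer join forupdate target
        where target.FieldA      == staging.FieldA
           && target.FieldB      == staging.FieldB
           && target.NewKeyField == staging.NewKeyField;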
The lesson(s) learned:
1) Pay attention to the 'DataEntity' relation on staging tables - it matters!
2) Don't clone entities if you can avoid it.
3) If you do clone an entity, get into the habit of using "regenerate staging" where possible to clean out any stale keys/relations/etc. that may have changed after you updated the entity itself.
4) If you can't "regenerate staging" because there is base code on the staging table that you want to keep (as was my colleague's case on this entity), make sure you update the 'DataEntity' relation to completely match the entity key!
5) (bonus) Containers are still very much alive and well, especially in the DMF code.
Thank you for following me through this first bite-sized "dynamics journey" post - hopefully you got some use out of it!
Comments
Right Brian, Microsoft expects us to extend entities wherever possible. A few benefits of *not* cloning: if Microsoft updates the entity, you get the update for "free"; the risk of "crossing the wires" (i.e. forgetting to change a name or reference in the clone and adversely affecting both your entity and the original) is almost nonexistent; and any external references to the entity elsewhere in the codebase will still work as expected. Having said that... even certain seemingly "small" things (such as exposing an existing entity on the OData endpoint) can't be done via extension, and recreating the sometimes complex dependency relationships between entity data sources is almost always more work than starting with something Microsoft already built, so you do still see clones all over the place.