I am looking for the correct place in the code to put a general trigger that sends data changes on table records (INSERT / UPDATE / DELETE) to an external application, as soon as the record or the changes can be read from outside via OData (entity).
We tried to implement the "Inserted" / "Updated" / "Deleted" event handler methods like this.
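A minimal sketch of what such a data event handler could look like (class and handler names are illustrative, not the actual code from the question):

```xpp
public static class MyTableEventHandler
{
    // Fires when a record is inserted into CustTable (table chosen as an example).
    [DataEventHandler(tableStr(CustTable), DataEventType::Inserted)]
    public static void CustTable_onInserted(Common sender, DataEventArgs e)
    {
        // This runs inside the surrounding transaction, i.e. possibly
        // before the outer ttsCommit has made the record visible via OData.
        MyDirectWriterHandler::send(sender);
    }
}
```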
Within the "...DirectWriterHandler" the information about the data change is sent to a third party application. As soon as the information arrives in the third party application, the application tries to read the received record via ODATA.
If this happens inside a large TTSBEGIN / TTSCOMMIT, the third-party app requests the record before the final TTSCOMMIT, and therefore the record cannot be found, or the changes made to the record are not yet available via OData.
Is there any other point where I could put the "...DirectWriterHandler" send code so it is sent when the changes are available via ODATA?
The information about business events that occurred in the transaction is stored in the BusinessEventsCDSComponentChangeRequest table. The flag is just an optimization that allows skipping steps like querying the BusinessEventsCDSComponentChangeRequest table; the flag itself holds no information about which events should be raised.
Looking at the code, it seems that BusinessEventsCDSSyncHelper processes all records in the BusinessEventsCDSComponentChangeRequest table that aren't locked.
Note that if you have inserts in three tables in the same transaction, they're all committed together. A transaction is an atomic operation, you can't say that a third of a transaction is committed and two aren't. Either all changes are committed or none is.
@Martin Dráb
In BusinessEventsCDSSyncHelper.Application_ttsNotifyCommit we find the registration of a delegate for a commit event. Is it correct that the delegate is fired by any commit?
How is it possible to detect which transaction this commit belongs to? In the example they work with a global cache variable 'changeRequestsExistFlag', which needs to be set at the starting point.
When the need is to synchronize 50 tables, and about 3 of those tables receive an insert before a commit is returned, then we have 3 variables set, one for each table. When the first commit is fired, how do we know which table the commit belongs to? It can be any of the three, and the other two are not yet ready for further processing.
You aren't right when saying that there will always be a transaction around insert(). It's true for update() and delete(), but not for insert(). If there isn't a transaction in the code calling insert() but there is one inside insert(), a CoC extension won't run in a transaction but an event handler will.
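To make the difference concrete, here is a hedged sketch of the CoC alternative (table, class, and handler names are illustrative):

```xpp
[ExtensionOf(tableStr(CustTable))]
final class CustTable_MyExtension
{
    public void insert()
    {
        next insert();

        // If the caller did NOT open its own transaction, any transaction
        // opened inside the base insert() has already been committed by the
        // time next insert() returns, so this code runs outside it.
        // An onInserted event handler, by contrast, would still fire inside
        // that transaction. With an outer ttsBegin in the caller, both
        // variants run inside the caller's transaction.
        MyDirectWriterHandler::send(this);
    }
}
```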
If you want to see examples of code reacting on transaction commit, look at references of Application.onTtsNotifyCommit. For example, you can see how BusinessEventsCDSSyncHelper.Application_ttsNotifyCommit starts asynchronous message sending just after committing a transaction. It doesn't do it inside the transaction to avoid performance issues and dirty reads and it does it asynchronously not to block user processes. Inside the transaction, it merely remembers what should be processed afterwards, it doesn't do the actually processing before the transaction gets committed.
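A simplified sketch of that pattern, modeled loosely on BusinessEventsCDSSyncHelper.Application_ttsNotifyCommit (the tracker and sender classes are hypothetical placeholders, and the exact subscription attribute should be checked against the standard code):

```xpp
public static class MyCommitSubscriber
{
    [SubscribesTo(classStr(Application), delegateStr(Application, onTtsNotifyCommit))]
    public static void Application_ttsNotifyCommit()
    {
        // Called after the outermost ttsCommit. Inside the transaction we
        // only recorded what changed; now that the data is committed (and
        // therefore visible via OData), start sending asynchronously so the
        // user process isn't blocked by slow or unreachable endpoints.
        if (MyChangeTracker::hasPendingChanges())
        {
            MyChangeSender::sendAsync();
        }
    }
}
```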
Thanks Martin for your assessment and the explanation of the problems that could arise. I will really take this into consideration (export to data lake).
Just regarding the behaviour of event handlers vs. CoC (mentioned by Layan): do you know whether they behave differently with respect to TTSBEGIN / TTSCOMMIT? If not, is there any other possibility to get an "event" when the record is finally committed to the DB?
Suggested answer
Martin Dráb – Most Valuable Professional
This design is guaranteed to cause performance and data consistency issues and it may block the whole ERP system.
Not only have the records not been changed yet, they may never be (e.g. an error occurs and the transaction is rolled back, although the external system was already informed about the change).
Also, doing things like web service calls inside database transactions is a bad idea. Imagine that the web service can't be reached at all and the call fails. Then users can't insert any customers, for example. Or maybe it works but is very slow because of a network issue. Then your code slows down the system, transactions hold locks for much longer (leading to further blocking issues or deadlocks), and so on.
I suggest you utilize the export to data lake instead of trying to develop something similar (but much worse) by yourself.
Thanks for the hint. Are you sure that an event handler and a CoC extension behave differently regarding the TTSCOMMIT?
I would assume that putting TTSBEGIN / TTSCOMMIT around the "next" call will not change the behaviour if it is surrounded by another TTSBEGIN / TTSCOMMIT, which will always be the case because every data change (INSERT / UPDATE / DELETE) in X++ must be handled within TTSBEGIN / TTSCOMMIT. Therefore records or changes will only be available after the final TTSCOMMIT.
So you mean the data being sent is the old data and not the new data?
I would advise using extensions (Chain of Command) instead of event handlers.
Using extensions, you could also put a ttsbegin before the next call and a ttscommit after your call, to make sure your new changes will be sent:
ttsbegin;
next insert(); // or the relevant method being extended
DirectWriteHandler();
ttscommit;
Thanks,
Layan Jweihan
Please mark this answer as "Verified" if it solved your issue, in order to help others who face a similar issue in the future.