X++ is designed for concurrent workloads, enabling multiple users to perform their operations at the same time. Sometimes such operations conflict, and the system has to detect the conflict, recover, and proceed. Data inconsistencies must not be allowed.
Insert
For insert operations, consistency is ensured by uniqueness constraints in SQL. If a user tries to insert a record that already exists, the transaction is aborted and the user is presented with an error message.
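As a minimal sketch of what that looks like from X++ code, assuming a hypothetical table MyTable with a unique index on a KeyField field, the rejected insert surfaces as a duplicate key exception:

MyTable myTable;    // hypothetical table with a unique index on KeyField

myTable.KeyField = 'ABC';

try
{
    // If another user has already inserted 'ABC', SQL rejects this insert
    // and the kernel raises a duplicate key exception.
    myTable.insert();
}
catch (Exception::DuplicateKeyException)
{
    // The insert was aborted; inform the user or pick a different key.
    error("The record already exists.");
}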
Update
Update operations are trickier. When updating a record, the system ensures that no one else has updated or deleted the record in the meantime. This is implemented using the RecVersion field: when a record is updated, the same record must still exist in SQL with the same RecVersion as the one that was selected. If not, the transaction is aborted.
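In X++ this surfaces as an update conflict exception. Here is a minimal sketch of the usual handling, assuming a hypothetical MyHeader table with optimistic concurrency enabled and no outer transaction:

MyHeader header;    // hypothetical table with optimistic concurrency enabled

try
{
    ttsbegin;

    // 'select forUpdate' remembers the RecVersion of the row that was read.
    select forUpdate header
        where header.headerId == 'H-001';

    header.numberOfLines += 1;

    // The update only succeeds if the row still has the selected RecVersion;
    // otherwise someone else changed or deleted it meanwhile and the kernel
    // throws Exception::UpdateConflict.
    header.update();

    ttscommit;
}
catch (Exception::UpdateConflict)
{
    // Retry a few times, then give up.
    if (xSession::currentRetryCount() < 5)
    {
        retry;
    }
    throw Exception::UpdateConflictNotRecovered;
}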
Delete
Delete operations are the trickiest. As described above, the system protects against a simultaneous delete and update of the same record. However, by default it does not protect against two simultaneous delete operations. After all, if two users both decide to delete the same record at the same time, why fail one of them?
Here is an overview:
| User 1 | User 2 | Consistency                     |
|--------|--------|---------------------------------|
| Insert | Insert | Guaranteed                      |
| Update | Update | Guaranteed                      |
| Update | Delete | Guaranteed                      |
| Delete | Update | Guaranteed                      |
| Delete | Delete | Not guaranteed (unless enabled) |
If there is no X++ logic in the delete methods on the table (or on any of the downstream tables where delete actions are fired), then this is perfectly fine.
But that is not always the case. Consider this example:
public class MyLines extends Common
{
    str headerId;

    public void delete()
    {
        ttsbegin;

        // Delete the line itself.
        super();

        // Keep the parent header's line counter in sync.
        MyHeader header = MyHeader::find(this.headerId, true);
        header.numberOfLines--;
        header.update();

        ttscommit;
    }
}
If two users attempt to delete the same line at the same time, both of them will end up decrementing the header, so the line count is decremented twice for a single delete and can even go negative. The first thread holds a delete lock on the lines table, blocking the second thread on the super() statement. Once thread 1 commits, the second thread performs the same delete (which does nothing), then selects the header that has already been decremented and decrements it again.
This is the default behavior.
The solution
If you don't like it, do like me and override the new PU35 method: shouldThrowExceptionOnZeroDelete(). Always. No second thoughts. On new tables, on existing tables, on regular tables, on tmp tables (why not?).
If this method returns true, the database layer will throw an UpdateConflict exception when attempting to delete a record that is no longer there.
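The override itself is trivial. Here is a sketch on the MyLines table from the example above (the exact boolean signature is my assumption; check the definition on xRecord in your version):

public boolean shouldThrowExceptionOnZeroDelete()
{
    // Opt in: deleting a record that no longer exists now throws
    // Exception::UpdateConflict instead of silently affecting zero rows.
    return true;
}

With this in place, the second thread in the MyLines example above fails with an update conflict on its super() call instead of decrementing the header a second time.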
Small caution
When we blindly made shouldThrowExceptionOnZeroDelete() return true across our application, a few product bugs surfaced. If the same thread attempts to delete the same record twice, it will now start failing. A typical product bug where this happens is when a delete action is expressed both declaratively in metadata and imperatively in code. Still, these bugs are much better to have (as data remains consistent) than the data inconsistencies that are the alternative.
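A hypothetical illustration of that double-delete pattern, reusing the MyHeader/MyLines names from above and assuming a cascading delete action from MyHeader to MyLines plus find() methods on both tables:

MyLines  line   = MyLines::find('L-001', true);
MyHeader header = MyHeader::find(line.headerId, true);

ttsbegin;

// The cascading delete action declared in metadata already deletes the line here.
header.delete();

// Imperative code then deletes the same record a second time. It affects zero
// rows, so with the opt-in enabled this now throws Exception::UpdateConflict
// instead of silently doing nothing.
line.delete();

ttscommit;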
Why not change the default behavior
The answer is really simple: to stay backwards compatible. For this one we have an opt-in model; there is no other way while honoring our promise of backwards compatibility.
THIS POST IS PROVIDED AS-IS AND CONFERS NO RIGHTS.