1. Why might batch jobs take time to pick up work from the table?
2. When you say all batch servers are busy, do you mean all threads are running?
3. Why was the batch job history's CreatedDateTime equal to the EndTime instead of the actual StartTime?
As I've mentioned, I need to fix this delay; we want the data to arrive in less than one minute.
To explain what I'm doing in general:
Currently, when a button is clicked to post, say, a journal, then once the posting is done and the dialog closes, I call a SysOperation framework service class with the ReliableAsynchronous execution mode. The service inserts into a custom table that works as change tracking, and if a condition is met it also exports the composite entity and sends the exported file to Azure Blob (plus some additional logic), because such journals are needed in real time. If the condition is not met, the service only inserts into the table, and a recurring integration will pick those journals up later.
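For context, this is roughly how the dispatch looks after posting. A minimal sketch only; the service, contract, and method names are placeholders, not my real classes:

```xpp
// Sketch of the call after the journal posts (illustrative names).
// ReliableAsynchronous creates a batch task, so it still has to wait
// for a free batch thread before the service actually runs.
SysOperationServiceController controller = new SysOperationServiceController(
    classStr(MyJournalTrackingService),                      // hypothetical service class
    methodStr(MyJournalTrackingService, processJournal),
    SysOperationExecutionMode::ReliableAsynchronous);

MyJournalTrackingContract contract =
    controller.getDataContractObject() as MyJournalTrackingContract;
contract.parmJournalId(journalId);

controller.startOperation();
```

The key point for my question is that last comment: the controller only enqueues a batch task, so the actual start time depends on the batch framework picking it up.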
This works quickly on my dev box, taking between 10 and 15 seconds.
Now in prod we have a performance issue: people say it takes more than one minute for the file to reach the customer.
I've looked at a few examples, and I can indeed see that the difference between the ModifiedDateTime of the journal posting and the CreatedDateTime of the insert into my custom table (which happens inside the batch, after the posting) is sometimes more than two minutes! And there are cases where it's quick.
Journal table

| JournalId | CreatedDateTime | ModifiedDateTime |
|-----------|-----------------|------------------|
| ####### | 8/10/2023 2:00:08 PM | 8/10/2023 2:00:15 PM |

Custom table

| CreatedDateTime |
|-----------------|
| 8/10/2023 2:01:14 |

BatchJobHistory

| Actual start time | EndTime | CreatedDateTime |
|-------------------|---------|-----------------|
| 8/10/2023 2:01:14 | 8/10/2023 2:00:24 PM | 8/10/2023 2:00:24 PM |
As you can see, the custom table record was inserted at a time close to the batch job history's "Actual start time" -- so the issue seems to be the delay before the batch actually started.
What I did is move this batch job to a critical batch job group (it's the only job in that group) -- and the performance is still the same.
4. So is the delay really happening because of the batch job thread limit?
5. I also have a suggestion in mind, and I'd like to know whether you think it might improve the situation.
So currently, as I said, this custom table gets inserted into for all needed journal types (around 14 types); if the journal is needed in real time, the service goes on to export the entity and send it to Azure Blob. If it's not needed in real time, it just inserts into the table and waits for the recurring job to pick the records up.
These journals get created a lot; we can see more than 10 journals per minute.
If I divide these 14 journal types between two batches -- I mean, I can create two SysOperation framework services, where one only inserts the record into the custom table,
and the other inserts into the custom table, exports the composite entity, and sends it to Azure Blob -- and then put only the latter into the critical batch group.
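To make the idea concrete, here is a rough sketch of the split I have in mind. All class names, the condition flag, and the batch group name are assumptions for illustration:

```xpp
// Hypothetical split: real-time journals go through a service pinned to the
// critical batch group; everything else goes to a lightweight insert-only service.
SysOperationServiceController controller;

if (isRealTimeJournal)   // assumed condition flag
{
    controller = new SysOperationServiceController(
        classStr(MyRealTimeExportService),   // insert + export entity + send to Azure Blob
        methodStr(MyRealTimeExportService, run),
        SysOperationExecutionMode::ReliableAsynchronous);

    controller.batchInfo().parmGroupId('Critical');   // assumed batch group name
}
else
{
    controller = new SysOperationServiceController(
        classStr(MyTrackingInsertService),   // insert into the custom table only
        methodStr(MyTrackingInsertService, run),
        SysOperationExecutionMode::ReliableAsynchronous);
}

controller.startOperation();
```

The intent is that the heavy export/upload work no longer competes with the cheap insert-only tasks for the same batch threads.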
5a. Would dividing them into two batches improve the performance issue I'm facing?