There is no "set" range or guidelines around this for Dynamics GP.
Microsoft Dynamics GP was designed for small to medium businesses, not for large transaction volumes, and the posting processes have become more complex over the years with new features, ISV/third-party add-ons, etc.
In general, the posting process in Microsoft Dynamics GP moves a copy of the transaction data from the WORK tables in SQL to the OPEN or HISTORY tables for that module. This is consistent across almost every module (GL, Sales, Purchasing, Inventory, Payables, Receivables, etc.). The move of this data often occurs while the Posting Journals are printing.
There can be a window where the transactions exist in both the WORK and the OPEN or HISTORY tables. This is intentional: we don't delete your WORK records until we have confirmed they moved over, to help prevent data loss where possible. Once the Posting Journals finish printing, the WORK records are removed, as everything has been confirmed as moved to OPEN or HISTORY.
When the Posting Journals are being printed, many temporary files are generated in the user's %TEMP% folder (C:\Users\XXXXX\AppData\Local\Temp) to hold the information for the report(s) being compiled. If you have a very large amount of data or a large number of transactions in a single batch being posted, these files grow, which can cause issues with the posting process.
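To get a feel for how much disk the report compilation is using, a small diagnostic script like the one below can measure the size of the user's %TEMP% folder before and during a posting run. This is purely an illustrative sketch, not part of GP; the function name is my own.

```python
import os
import tempfile

def folder_size_mb(path: str) -> float:
    """Total size, in megabytes, of all files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                # Temp files can disappear, or be locked (e.g. by anti-virus),
                # while we are scanning; skip those.
                pass
    return total / (1024 * 1024)

if __name__ == "__main__":
    temp_dir = tempfile.gettempdir()  # resolves to %TEMP% on Windows
    print(f"{temp_dir}: {folder_size_mb(temp_dir):.1f} MB")
```

Running it once before posting and again mid-posting shows how quickly those report files accumulate for a large batch.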
This isn't something Microsoft Dynamics GP can control, as some machines restrict how large files in the temp folder can get, and some anti-virus applications will see this flood of growing files as a threat and lock them to scan them. This is when we most commonly see posting interruptions.
Another factor to consider is whether you have shared forms or reports dictionaries. Larger batches take longer to post and are therefore more susceptible to network drops or disconnects from those shared dictionary locations, even if the specific report being printed is not modified.
One thing to seriously consider is making your batches smaller; if something happens to a small batch, it is much easier to recover from than a large one.
Let's say you have a batch with 1,000 transactions in it. You also have to think about how many line items and/or distribution lines each of those transactions has. Every company is different, and the transaction type can make a difference too: some transactions will have 10+ distribution lines if they carry taxes, commissions, or discounts, but even a conservative estimate would be 6 distributions per transaction. Your Posting Journals are then trying to compile 1,000 x 6 = 6,000 distribution lines to print on the report, and that all adds up and makes the temporary files and tables grow.
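The arithmetic above can be sketched as a quick back-of-the-envelope estimate. Note that 6 distributions per transaction is just the working assumption from this example, not a GP constant, and the function name is illustrative:

```python
def estimated_report_lines(transactions: int, distributions_per_txn: int = 6) -> int:
    """Rough count of distribution lines the Posting Journal must compile."""
    return transactions * distributions_per_txn

# 1,000 transactions at ~6 distribution lines each
print(estimated_report_lines(1000))  # 6000
```

Swap in your own per-transaction line count to size up how much report data a given batch will generate.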
Ultimately, every business and environment will perform slightly differently based on many factors, but Microsoft Dynamics GP was designed for small to medium businesses. We also do not publish a specific number, because every environment, module, and transaction type can vary as well.
Over the years things will change; SQL Server actually gets better and better all the time, so even though GP may not change, SQL will perform the process faster for you as you upgrade to a newer version.
Some general recommendations for posting larger batches, if they are failing often, would be the following:
-Try posting these larger batches directly on the SQL server on an install that doesn't have any shared forms or reports.
-Try turning off Posting Journals for the transaction types that you are importing. (Administration >> Setup >> Posting >> Posting) You can always reprint them later if needed.
-Try posting 'To' the General Ledger instead of 'Through' so that you separate the two batch posting processes. (Administration >> Setup >> Posting >> Posting)
The best solution would be reducing the size of your batches if you are having issues.
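If your transactions come in through an import or integration, one way to act on this advice is to chunk the incoming transactions into several smaller batches before posting rather than one large batch. A minimal sketch, assuming a simple list of transaction records (the function and field names here are purely illustrative):

```python
from typing import List

def split_into_batches(transactions: List[dict], batch_size: int) -> List[List[dict]]:
    """Split a list of transactions into chunks of at most `batch_size` each."""
    return [transactions[i:i + batch_size]
            for i in range(0, len(transactions), batch_size)]

# e.g. post 1,000 transactions as four batches of 250 instead of one batch of 1,000
txns = [{"id": n} for n in range(1000)]
batches = split_into_batches(txns, 250)
print(len(batches))  # 4
```

Smaller batches mean smaller Posting Journals, smaller temp files, and far less to recover if a single batch is interrupted.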
Thanks
Terry Heley
Microsoft