RE: SQL WennSoft GP questions
Hi,
Like the others, it's a big question to answer on a forum. I can only add the things that have made a remarkable difference in performance for some of my larger clients. When I say 'larger', I mean the database itself; in my world, that is something in the realm of 100 GB for the .mdf file.
What turned us from 'take a coffee break' to 'this works' was the distribution of the data. We added more hard drives to the installation. Not more 'partitions'; more physical drives. We separated each of these onto a different physical drive, or drive array:
tempdb
swap file (pagefile)
application
transaction log (.ldf)
database file (.mdf)
operating system (Windows)
For larger databases, we also split the data into multiple files within SQL Server; a quick sketch of adding a file follows.
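For illustration, adding a secondary data file on another physical drive looks roughly like this in T-SQL. The database name, logical file name, path, and sizes are all placeholders; substitute your own values and try it on a test copy first:

-- Add a second data file on a different physical drive.
ALTER DATABASE TWO  -- 'TWO' is a placeholder company database name
ADD FILE
(
    NAME = TWO_Data2,                       -- logical file name (placeholder)
    FILENAME = 'E:\SQLData\TWO_Data2.ndf',  -- path on the new drive (placeholder)
    SIZE = 10GB,
    FILEGROWTH = 1GB
);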
Also, look at your inventory valuation method (if you use inventory). Average Perpetual will take forever (and I mean forever) to post if you have a heavy transaction load and allow your inventory to go negative, because the system recalculates the cost at every line. If you are looking for a level cost, consider changing to standard cost. The system will speed up so much, you will think it's not really processing the transactions. The query below is one way to see which methods you're using.
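If you want to see where you stand, something like this against the GP item master should do it (I'm assuming the standard IV00101 table and its VCTNMTHD column here; verify the value mapping on your own version):

-- Count items by valuation method in the GP item master (IV00101).
-- VCTNMTHD: 1 = FIFO Perpetual, 2 = LIFO Perpetual, 3 = Average Perpetual,
--           4 = FIFO Periodic, 5 = LIFO Periodic (check against your install).
SELECT VCTNMTHD, COUNT(*) AS ItemCount
FROM IV00101
GROUP BY VCTNMTHD
ORDER BY VCTNMTHD;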
Look at what else is running on the SQL Server box, and at how much memory and priority you give SQL Server versus other tasks.
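One concrete knob here is capping SQL Server's memory so it leaves room for the operating system and anything else on the box. A rough sketch (the 8192 MB figure is just an example; size it to your server):

-- Show advanced options so 'max server memory' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 8 GB (example value only; leave headroom for the OS).
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;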
Consider using a process server. This won't 'speed up' the actual processing, but it offloads the work from your workstations, so the workstation can carry on without waiting for whatever you're doing to complete.
If you have millions of records in your database, consider archiving or deleting old data to speed up your searches and other lookups; a quick way to size the problem follows.
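Before removing anything, it helps to see how the volume breaks down by year. As a sketch, I'm using the GL history table GL30000 and its TRXDATE column as an example; the same idea applies to whichever modules are heaviest for you:

-- Row counts by transaction year in GL history (GL30000).
-- Table and column names assume a standard GP install; adjust for your modules.
SELECT YEAR(TRXDATE) AS TrxYear, COUNT(*) AS RowsInYear
FROM GL30000
GROUP BY YEAR(TRXDATE)
ORDER BY TrxYear;

When it comes to the actual removal, use GP's own history removal utilities rather than raw DELETE statements, so the related tables stay consistent.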
I hope this at least gives someone a few tips on how to speed up a slow production environment.
Kind regards,
Leslie