Hi,
Newbie here and a total novice, so please bear with me.
We just had an upgrade done from Navision 5 to Dynamics NAV 2009 R2 running the Classic client. We are in a workgroup environment, so there are no group policies or anything like that in place. Since the upgrade, NAV as a whole has been running a little slower, but our reports are running VERY slowly. It is taking up to 5 minutes to produce a simple Aged A/P or A/R report that used to run in about 30 seconds. We also run Jet Reports and have the same issue.
The guy who did our install is great with database work, but not so much when it comes to things like this. He thought it might be a hardware issue, but our server is only a couple of years old, and when I check the resources being used while running reports it's not struggling at all. The server only has 4 GB of RAM because it's running 32-bit Windows 2003, but I don't know if that's the issue.
So my question is: are there any simple things I can check to try to increase the speed? I don't know if there are settings I can look at, or possibly a hotfix or something available. Here is the current build: Version HCM 6.00.04.01, US Dynamics NAV 6.0 R2 (6.00.33413). If I can provide any more details that would help, I'll give it my best shot, but I'm not very knowledgeable with this software at all.
Happy to help, glad you got it sorted
My suggestion is that when you have some budget and time, you can look at adding additional drives to the server's hard drive array. Although the database now has a gig of RAM available to it, if the database is bigger than that it will still be swapping out from the HDD.
The more drives you put into the array, the faster the server will get, so adding drives is extremely good value for money. I might be wrong here, but the point of splitting the database into files is to use different drive sets to handle different parts of the database. I'm not sure of the impact of that, so my suggestion would be to initially just increase the drives in the array, as this is the simplest performance boost.
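For reference, if you do try the multi-part route, the database parts end up as startup parameters on the classic database server. A rough sketch of what that might look like (the server name, drive letters, and file names here are made up, and I'd double-check the exact syntax in the nav_install.chm guide I pointed to earlier, but as I recall the parts are joined with plus signs):

    server servername=MYSERVER, nettype=tcp, cache=1024000, database=e:\nav\part1.fdb+f:\nav\part2.fdb+g:\nav\part3.fdb

The idea is that each part sits on a different spindle or array, so reads and writes are spread across more disks.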
Cheers
Nev
Nev, I went ahead and moved the cache up as suggested, and that helped tremendously! I had also put in a ticket with our Microsoft partner and heard back from her shortly after I made the adjustment; she confirmed this is the first thing they look at when speed issues are present. She also suggested that with the native database I split the database up into 3 or 4 separate files rather than using a single larger file. Our database is currently 6 GB; she suggested using four 2 GB database files instead. I haven't tried this yet, as we seem to be running much better now, so I'd rather leave well enough alone for now.
Thank you for your help!
Hi
Yep, basically what it does is load the information from the HDD into RAM, so that when stuff runs, it runs from RAM, not the slow (by comparison) hard drives.
Memory vs hard drives (I put this comparison together a few years ago, so everything is faster now, but the concepts are the same since drives and RAM have both sped up in roughly the same ratio):
-The slowest point on a DB server is drive access: a 15K drive offers 125 MB per second of sustained transfer and an average read/write latency of 4 ms
-Compare this to server memory, with read/write throughput around 8 GB per second and access times in the region of 0.001-0.010 ms
So RAM is up to 4000 times faster on access and about 64 times faster to read from.
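To make the arithmetic explicit (using the round figures above and taking 8 GB as 8000 MB):

    Access time:  4 ms / 0.001 ms      = 4000x faster
    Throughput:   8000 MB/s / 125 MB/s = 64x faster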
Looking at these numbers, you can understand why your performance plummeted if your previous cache was set to a gig and that setting disappeared with the upgrade.
Now onto data loss:
Yes, data is not committed immediately into the database.
The person you are talking to is incorrect: the system updates the disk after a transaction is committed, so worst case you could lose the last transaction or two currently in process should there be a failure. I dug this up to confirm it: support.microsoft.com/.../874296
"The modified data will be written to the DBMS cache, and not to the disk. When this client completes the write transaction (that is, commits the changes), the data in the cache that was modified during the transaction will be written to the disk. The cache is then said to be flushed. The DBMS cache always contains the most recently used data. The cache is continually updated with the relevant data from the database."
What you need to worry about with the classic database is that it cannot automatically recover from a power failure during a write to the drives, and that risk exists regardless of the size of your cache. You just need to ensure you are on a UPS and making regular backups.
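If it helps to picture it, here is a tiny sketch of the write-back behaviour the KB article describes. This is purely illustrative Python, not NAV's actual code: writes land in the cache, and the modified pages are flushed to disk when the transaction commits.

    # Illustrative write-back cache, loosely modelling the KB description.
    # Not NAV's real implementation - just the commit-time flush idea.
    class DbmsCache:
        def __init__(self):
            self.pages = {}     # page id -> data (most recently used data)
            self.dirty = set()  # pages modified by the open transaction
            self.disk = {}      # stands in for the database file

        def write(self, page, data):
            # Writes go to the cache only, not to disk.
            self.pages[page] = data
            self.dirty.add(page)

        def commit(self):
            # On commit, the modified pages are flushed to disk.
            for page in self.dirty:
                self.disk[page] = self.pages[page]
            self.dirty.clear()

    cache = DbmsCache()
    cache.write("A", "new value")  # visible in the cache, not yet on disk
    cache.commit()                 # now it is on disk
    cache.write("B", "in flight")  # a crash here loses only this change

A power cut before a commit loses only the in-flight transaction; everything committed earlier is already on disk, which is why a bigger cache doesn't increase the exposure.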
So pump the cache up to 1024 MB, because it has to be there for your system to run properly (and it must have been set before the upgrade, or you wouldn't have seen the slowdown), and in terms of system stability it will make you no worse off.
You should also seriously consider moving to SQL Server to get 64-bit support, better memory allocation, automated backups, automatic recovery from most database write failures (whereas the classic DB server just ends up with a corrupt database), automated maintenance, etc.
I asked the guy who did our upgrade about this earlier and this is his reply. Would this not be true?
"The danger of having the DBMS cache setting this high all the time is--data input accumulates in the DBMS cache and no data is committed to the database until the DBMS cache is filled. Only then is the data committed. Prior to the data being committed the data input can be seen in the database but until it is committed a sudden unexpected shutdown of the server would cause a rollback of the database to the state at which the last cache commit occurred. Any data waiting to be committed would be lost and would have to be reentered. This could mean days, even weeks of lost data"
Hi
On the server you need to push the server cache all the way up to its max; 40 MB will still have no effect on performance.
The classic DB service is 32-bit, so it can't handle more than a gig of RAM (which is why you should think about moving to SQL).
So change the cache on the server to 1024000 (the setting is in KB, so that is roughly 1 GB), then restart the server.
The next bit is that the cache only gets filled as a report, form, or page runs, so run it 3 times and then see what the run speed is on the 4th run.
The object cache on your local machine will help, but you won't see any dramatic improvement from that; I would say 50-100 MB is plenty.
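For reference, the cache is a startup parameter of the classic database server. A rough sketch of what the command line and service restart might look like; the server name, database path, and service name below are made up, so check your own install for the real values:

    server servername=MYSERVER, nettype=tcp, database="d:\nav\database.fdb", cache=1024000, commitcache=yes

    net stop "Microsoft Dynamics NAV Classic Database Server"
    net start "Microsoft Dynamics NAV Classic Database Server"

If it was installed as a Windows service, the parameters usually live on the service's command line, so you edit the cache value there and then restart the service.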
Cheers
Nev
Tried bumping up the DBMS cache on the server from 10 MB to 40 MB with no improvement. I also tried moving my system's object cache all the way up to the 1 GB max, since I am running with 32 GB of RAM, but that didn't do anything either.
In order to do the update they will have uninstalled the server database component for version 5 and installed the 2009 R2 component.
After this change, I suspect they did not increase the default cache size of the server; the defaults are pretty low and will cause this slowdown (I have seen this at a client before).
Sorting it out is pretty simple, you can find all the information on the NAV install CD under
Documentation\PFiles\Microsoft Dynamics NAV\60\Documentation\Install Guides\
nav_install.chm
Then go into Configuring -> Configuring Classic Database Server for all the info.
Shout if you run into any issues and I can assist, as I have sorted out the same issue before.
Cheers
Nev
Hi Nev,
We did not convert the database and are still running with the native one.
Hi
Did they do a conversion from the native database format to SQL Server at the same time?
Cheers
Nev