I am a database administrator who supports a SQL 2016 instance for an on-prem Business Central install.
Several years ago I supported a few SQL instances for AX and GP installations at a number of client sites, but they were always handed to me by the ERP consultant with the relevant settings and maintenance jobs already configured for the specific needs of the application. I was vaguely aware that several trace flags and a very low fill factor had been set at the time, but otherwise my responsibility was mostly to ensure the necessary jobs ran and the database was backed up. Performance problems were always resolved by the consultant.
I have been looking now for specific performance configurations for BC and haven't found much on the SQL side.
A recent issue has been heavy blocking, almost exclusively on update (U) locks, with a small amount of contention on shared (S) locks.
Is it the case with BC that indexes must be managed through BC, or can I create them myself? If I remember correctly, adding indexes directly to a SQL instance serving AX or GP caused some sort of problem. Reviewing the indexes currently in the database, all of them look like they were created by some sort of ORM, based on the naming.
I have almost 30 tables with between 10 and 17 indexes each, about two-thirds of which have 5-9 key columns. In a typical mixed OLTP/OLAP application database I would try very hard not to put even 7-8 indexes on a table, and would keep indexes to at most 4-5 key columns unless there was a really compelling performance reason to risk the insert performance; that goes double for an application that is extremely sensitive to insert, update, and delete performance. I can see from index usage statistics that most of these indexes have either never been used or are rarely used. Could these indexes be contributing to some of the blocking?
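For reference, the index usage review described above can be done with a DMV query like the following. This is a sketch, not an official BC tool: the DMVs and columns are standard SQL Server, but the filter (nonclustered indexes with zero reads) is an arbitrary starting point, and the counters reset on every instance restart, so check uptime before trusting them.

```sql
-- Nonclustered indexes with no seek/scan/lookup activity since the last
-- restart, ordered by how much write maintenance they still cost.
SELECT
    OBJECT_NAME(i.object_id)            AS table_name,
    i.name                              AS index_name,
    ISNULL(us.user_seeks, 0)            AS user_seeks,
    ISNULL(us.user_scans, 0)            AS user_scans,
    ISNULL(us.user_lookups, 0)          AS user_lookups,
    ISNULL(us.user_updates, 0)          AS write_cost
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.type_desc = 'NONCLUSTERED'
  AND ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0)
      + ISNULL(us.user_lookups, 0) = 0
ORDER BY write_cost DESC;
```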
Should I be rebuilding indexes with lots of empty pages? SQL Server generally hasn't benefited from leaving empty pages since around SQL 2005, but I am curious whether there is something particular about the way BC writes to SQL that could benefit from it.
Another issue I am seeing is that some of the queries BC generates are very inefficient. There are queries that scan millions of records and then filter them down to as little as a single result. I also have a lot of SELECT statements taking out update locks that cause blocking. I am not sure what my specific question is, other than asking for general advice on query tuning in BC.
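The scan-heavy statements mentioned above can be surfaced from the plan cache with a query along these lines (a generic DBA sketch against standard DMVs, not anything BC-specific; the TOP 20 cutoff is arbitrary):

```sql
-- Cached statements ranked by average logical reads per execution;
-- million-row scans that return a handful of rows float to the top.
SELECT TOP (20)
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_logical_reads DESC;
```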
We recently did a big posting of several thousand invoices and even after hours when not a single other user was in the system we still received heavy blocking.
There is no memory, CPU, or disk pressure reported by the SQL Server; nearly all of the waits are lock waits.
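The lock-wait picture described above can be confirmed from the cumulative wait statistics. A minimal sketch (standard DMV; counters are cumulative since the last restart or manual clear):

```sql
-- Lock wait types since the last restart: LCK_M_U is update-lock
-- contention, LCK_M_S is shared-lock contention.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
  AND waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
```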
Sorry if this was rambling, I just don't know what I need to ask or look for in this.
Awesome, thanks CreateNonClusteredIndex, this is really helpful.
To follow up on my previous comments after working on it for about 2.5 months:
I would now strongly recommend, even if just for the purpose of performance tuning, disabling automatic statistics updates and implementing a nightly statistics update job with a full scan. When statistics were allowed to auto-update, they updated so frequently that they continuously invalidated cached plans and obscured the full picture of what was going on over time. At least in my environment, transactions in Dynamics trickle in for most of the year and then burst to high volumes after certain business events. Once everything is sorted out you could turn auto-update back on, but if it is blowing away cached plans dozens of times per day, it will be difficult to see the real problems.
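A minimal sketch of the setup described above. The database name is a placeholder, and note that `sp_updatestats` only samples, which is why the job step builds explicit `UPDATE STATISTICS ... WITH FULLSCAN` commands instead:

```sql
-- One-time change: stop statistics from auto-updating mid-day.
ALTER DATABASE [BC_Prod] SET AUTO_UPDATE_STATISTICS OFF;

-- Nightly agent job step: full-scan statistics update on every user table.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'UPDATE STATISTICS '
             + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.'
             + QUOTENAME(t.name) + N' WITH FULLSCAN;' + CHAR(10)
FROM sys.tables AS t;
EXEC sys.sp_executesql @sql;
```

On a large BC database a full scan of every table can run long, so the job window needs to be checked against the posting schedule.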
After implementing this, it became clear that most of the performance problems were not from a small number of extremely inefficient queries, but from a high volume of somewhat efficient queries doing relatively small range scans. Most of the blocking occurred on S locks. Without disabling the automatic statistics updates, it was not immediately apparent that queries running dozens or hundreds of times per minute, each completing a range scan in under 200 milliseconds, were causing most of the blocking.
In my case, 75% to 90% of these queries came from customizations that didn't use the available indexes particularly efficiently and could be tuned up somewhat, but correlating what is slow in SQL back to the AL code hasn't been practical yet.
All you need to address this issue is SQL Mantra Tools (https://sqlmantratools.com/). We used it in our organisation, and it helped us optimise the database size and identify the performance bottlenecks, pinpointing the BC (AL) code weaknesses in our system. Please contact them and they will help you with their software, which is specifically designed to work with NAV and BC.
Thank you so much for posting this. This is great
For the topic of automatic statistics, I agree that it should be enabled. There used to be, and might still be, differing opinions on this, but you are right that the latest recommendation I have read from Microsoft is to keep it on.
I have seen some situations where it was not so smart to have it enabled: for example, a very large table where the primary key is poorly differentiated, such as a table with many millions of records whose PK starts with a boolean.
The statistics could make SQL Server do a table scan instead of using an index at all. I had one implementation with a NAV database of around 500 GB that suffered from statistics being updated too frequently on such a table.
Luckily, SQL Server has improved significantly over the years, and so has NAV.
Sure, so:
Trace flag 845 is no longer required in SQL 2014+; it formerly enabled SQL Server to manage its memory better under pressure, but that behavior is now always on.
Trace flag 4136, as I understand it, is generally recommended against by the Dynamics team on modern SQL unless you are addressing a specific issue of radically skewed data distribution.
TF 1117 and 1118 are the default behavior in SQL 2016+.
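To audit which of these legacy trace flags an instance actually still has enabled, a quick check (standard DBCC command; on a SQL 2016+ instance configured per the points above, the list should come back empty):

```sql
-- List all globally enabled trace flags on this instance.
-- TF 845, 1117, and 1118 behaviors are built in on SQL 2016+,
-- so any of them appearing here is leftover legacy configuration.
DBCC TRACESTATUS(-1);
```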
As for the recommendation to disable automatic statistics creation: it should probably stay enabled. I don't know if there were specific issues with statistics creation in previous versions of SQL, but lacking statistics for individual columns in multi-column indexes is a pretty big problem.
Dynamics also recommended that automatic statistics updates be enabled, whereas the guide recommended they usually be disabled and replaced with nightly statistics update jobs. Preliminarily, in my environment it seems that disabling auto-update and running a nightly full-scan statistics update could be better.
There is a snippet of a maintenance plan showing an index rebuild followed by a statistics update; index rebuilds already update statistics as they rebuild, so that step is not necessary in 2012+. (It may have been necessary in earlier versions; I only have minimal experience with SQL 7 through 2008.)
Regular index rebuilds in 2008+ are much less necessary than they used to be; statistics updates now provide most of the performance benefit instead.
It seemed like there were more points than this, but they may have come from other sources I no longer have open.
Thanks for sharing your experience. I think if you could quickly touch on what changed in SQL 2016+ (just at a really high level), it would help future readers as well; anyone who finds this post would then get a good idea of what they need to do. I know you are busy and may not have time for it, but if you can, it would help a lot.
Or you could write that up as a new post here and publish it as well.
That book was an excellent recommendation.
SQL Servers backing ERP systems have always been a little mysterious to me, with my ERP partners making a big deal about how specially configured they were, but that book has made it clear that they are mostly just regular SQL Servers with a small number of minor settings that wouldn't be configured in a more typical mixed OLTP/OLAP workload.
Some of the recommended settings are pretty old, but it is fairly easy to work out whether they are still relevant in SQL 2016+.
Something he recommends for small-to-medium deployments, and which Microsoft now recommends against, is disabling automatic statistics updates and instead running a daily full-scan statistics update. I had statistics updates happening so often that I wasn't sure what I was or wasn't seeing in the plan cache after recompiles, but based on the limited data I have, it may have improved performance slightly. Typically I would avoid such frequent statistics updates because of parameter sniffing problems, but I noticed that almost everything BC does uses 'OPTIMIZE FOR UNKNOWN', which should help avoid that.
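For readers unfamiliar with the hint, this is the pattern at work. The table and column names below are purely illustrative (not taken from an actual BC trace); the point is that the optimizer builds the plan from average density statistics rather than sniffing the parameter's runtime value, so frequent statistics-driven recompiles are less likely to flip plans back and forth:

```sql
-- Illustrative sketch of the hint pattern; names are hypothetical.
DECLARE @custNo nvarchar(20) = N'10000';

SELECT TOP (50) *
FROM [CRONUS$Cust_ Ledger Entry]        -- hypothetical BC-style table name
WHERE [Customer No_] = @custNo
OPTION (OPTIMIZE FOR UNKNOWN);          -- plan from density, not this value
```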
In general I would recommend creating indexes inside BC. SQL Server will use the indexes as needed no matter where they were created, but I find it generally safer to create them inside BC.
Also, have a look at the tools below. They are from the author of the book:
www.stryk.info/.../toolbox.html
www.stryk.info/.../magic7.html
Cool. I have some tools I use for all my other DBA work that are very similar to the Magic 7 set; I just haven't known which indexes were safe to remove or modify, or whether adding indexes outside of BC would create problems for the app, similar to the way a code-first DB would.