In my travels around the country from one Dynamics AX customer to another, I find that customers are usually aware of how important database statistics are to the overall performance of Dynamics AX. What I rarely find, though, is Trace Flag 2371 enabled (or even known to exist), or an in-depth understanding of how Microsoft SQL Server handles automatically updating those very important database statistics. Could it be dark magic that keeps these statistics updated? In this blog article I will try to demystify this process and attempt to convince you to embrace Trace Flag 2371 and enable it in your Dynamics AX environment.
Auto-Update Statistics (SQL Server Database Setting)
In the database settings for each database hosted on SQL Server there is a setting named “Auto Update Statistics”, which can be set to either “No” or “Yes”. From a Dynamics/SQL Server best-practices perspective this should always be set to “Yes” unless you are directed otherwise by a Microsoft support engineer. When it is set to “Yes”, SQL Server is allowed to automatically update the existing statistics in the database, and the more accurate the statistics, the faster the response time of a query (in most cases) will be. The question then becomes: how does SQL Server decide when to “auto-update” a statistic, and how does that work?

Currently SQL Server “marks” a statistic to be auto-updated when 20% of the data in that particular statistic has changed. Once it is marked, it is actually updated the next time someone sends a query to SQL Server that uses that statistic. No matter how many rows the statistic covers, SQL Server uses a static 20% threshold to determine when to auto-update it. This can cause issues with large tables that have 25, 50, 100 million rows, or more. If you have an INVENTTRANS table with 100 million rows, each statistic on that table covers 100 million rows, and Auto-Update Statistics is not triggered until 20 million rows change, which means it could take days, weeks, or even months in some cases for a statistic to trigger an auto-update. This can cause significant performance issues, as even a 1% change in data can be statistically significant enough for SQL Server to alter an execution plan. As a statistic gets older and more “stale”, you run the risk of SQL Server using an index scan instead of an index seek, which will degrade overall performance. This is the reason we always recommend that, even with Auto-Update Statistics enabled, customers run a nightly SQL maintenance job that updates all statistics with “UPDATE STATISTICS” or “SP_UPDATESTATS” to guarantee the statistics on large tables get updated at least daily.
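As a quick sketch of how you might check this yourself, the queries below show the Auto Update Statistics setting for the current database and how stale the statistics on a large table are. The table name `dbo.INVENTTRANS` is taken from the example above and assumed to be in the `dbo` schema; adjust for your own environment.

```sql
-- Check whether Auto Update Statistics is enabled for the current database.
SELECT name, is_auto_update_stats_on
FROM sys.databases
WHERE name = DB_NAME();

-- sys.dm_db_stats_properties (available in SQL Server 2008 R2 SP2 /
-- 2012 SP1 and later) exposes the row-modification counter that
-- SQL Server compares against the auto-update threshold.
SELECT s.name               AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.INVENTTRANS');
```

A statistic whose `modification_counter` is large but whose `last_updated` date is old is exactly the “stale statistic” scenario described above.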
In large, high-volume Dynamics AX deployments, sometimes even daily updating of statistics on these large tables is not enough to keep up with the new and modified data, and the statistics can become “stale” quickly. There have been occasions in the past where very-high-volume customers were forced to run “SP_UPDATESTATS” multiple times a day just to maintain consistent optimal performance, because stale or out-of-date statistics were causing significant performance issues. This is not ideal: we usually do not want this process running in the background during peak production hours, and because it updates statistics on any table where data has changed, it does a lot of unnecessary work. Rebuilding all statistics on every table whose data has changed since the last run risks causing its own performance issues, as all affected query plans then need to be recompiled, putting an unnecessary load on the CPU. What we really need is a more targeted approach: a process that takes into account how many rows are in a table and uses that information to judge when an auto-update of a statistic should be triggered, rather than a static 20%; a process that does not rebuild statistics on smaller tables unnecessarily just because data has changed, creating CPU overhead with unnecessary recompiles; and a process that would allow even the heaviest-volume customers to run “SP_UPDATESTATS” only once a day, and then only for precautionary reasons.
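For reference, the nightly precautionary job described above can be as simple as the sketch below. `sp_updatestats` only touches statistics that have at least one modified row; a full-scan update of a specific hot table (the `dbo.INVENTTRANS` name is the article's running example, not a requirement) can be added if sampled updates prove insufficient.

```sql
-- Nightly precautionary statistics maintenance (run off-peak).
-- sp_updatestats skips statistics with no modified rows.
EXEC sp_updatestats;

-- Optional targeted follow-up for a known large, hot table:
UPDATE STATISTICS dbo.INVENTTRANS WITH FULLSCAN;
```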
SQL Server Trace Flag 2371
Enter SQL Server Trace Flag 2371. This Trace Flag was released with SQL Server 2008 R2 SP1 and is available in all later versions. There was a small update to the Trace Flag in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 that makes it more precise. What Trace Flag 2371 does is change the static 20% threshold used by the auto-update statistics process into a dynamic threshold based on the number of rows: the larger the table, the lower the threshold, and the more often the statistics are auto-updated. This Trace Flag was added to SQL Server essentially expressly for ERP applications such as Dynamics AX that use cursors and rely heavily on singleton look-ups. Given all this, we believe there is no reason why Dynamics AX customers should not implement this Trace Flag, given its obvious benefits to overall system performance and the fact that it addresses one of the biggest SQL Server performance concerns for AX.
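To enable the Trace Flag, you can turn it on for the running instance and persist it as a startup parameter. The dynamic threshold under Trace Flag 2371 is commonly described as roughly SQRT(1000 × rows) for large tables; the query below illustrates, under that approximation, how dramatically it differs from the static 20% threshold for the 100-million-row INVENTTRANS example.

```sql
-- Enable Trace Flag 2371 globally for the running instance:
DBCC TRACEON (2371, -1);
-- To persist it across restarts, add -T2371 as a SQL Server startup
-- parameter via SQL Server Configuration Manager.

-- Static 20% threshold vs. the commonly cited SQRT(1000 * rows)
-- approximation of the dynamic threshold, for 100 million rows:
SELECT CAST(100000000 * 0.20 AS bigint)          AS static_threshold_rows,   -- 20,000,000
       CAST(SQRT(1000.0 * 100000000) AS bigint)  AS dynamic_threshold_rows;  -- ~316,000
```

In other words, instead of waiting for 20 million changed rows, an update on that table is triggered after roughly 316 thousand, keeping the statistics far fresher with no manual intervention.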