Bootcamp Notes: SQL Optimization


I’m back again with one of my Bootcamp notes. This time a few points about optimizing for SQL Server.

There have been a lot of changes in the way NAV talks to SQL Server. It no longer connects using cursors through OLE DB, but uses MARS (Multiple Active Result Sets) through ADO.NET.

First, while we are discussing SQL Server, let’s talk about Unicode again… We know we can use Unicode in our data, but not in the captions. How those characters are stored and compared is determined by something known as a collation. If you alter your database, you might notice that there are only Windows collations now, no more SQL Server collations (they were not Unicode compliant). There are also a lot fewer to select from: actually, only the Unicode-compliant collations. You can still select a collation which is not Unicode compliant, but it is better not to do that, as it could really mess up different characters in your data…

Collations are only used for sorting and for non-Unicode character data such as captions. Remember to check your collations when upgrading!

Keys, Indexes and tuning
When trying to improve your keys and indexes, there are a few things which are good to know and remember.

SQL Server is good at what it does, and that is fetching data and selecting the correct index for it. When you are tempted to add a key and use it in code because you think it is faster: DON’T DO IT… SQL Server is much better at picking an index anyway. Only use a key in code if you really want the records sorted that way.

When creating keys, think about selectivity. Every update potentially needs to update (several) keys… Try to put the fields which change the least at the beginning of your key. That also gives SQL Server nicer blocks to search through, so it can find your data faster.
The same goes for SIFTs. It might be better not to maintain a SIFT: they are managed asynchronously, but they still affect SQL Server performance by using resources, so test which is faster in your case.

Remember the Table Overview in the classic client? It is still there in the development environment. It also still has the button to optimize your tables, but it might not be a good idea to actually use it: SQL Server has its own optimization options, which are (again) created specifically for SQL Server.

Did I already mention there is no more enhanced security? In NAV 2013 only standard security remains, simply because enhanced security was slowing down your SQL statements: every call to data also needed to check your permissions on that object, and NAV still needed to do its own check anyway because of possible indirect permissions…

About tuning: SQL Server has some built-in possibilities for tuning, in the form of the Database Engine Tuning Advisor. First you run SQL Server Profiler for a while. You save the trace from the Profiler and present it to the Tuning Advisor, which will suggest indexes you might add or improve. It is possible it suggests indexes you cannot create from NAV. If you have a database which does not change often in terms of coding, you might consider letting the Tuning Advisor apply the changes directly to SQL Server. Just keep track of those changes when you later modify the objects in NAV, since NAV will overwrite them.

Coding
There are a lot of optimizations you can make just by thinking about performance when you code, so let’s list the most important here as well.

Get statement
The Get statement is fast because it always uses the clustered index: you supply the key fields of the clustered index. When you can use Get, DO use it.
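
A minimal sketch of a Get call, assuming a Customer record variable and a customer number that exists:

// Customer is a Record variable for the Customer table; '10000' is just an example "No.".
// GET seeks directly on the primary key, so it is a single cheap request to SQL Server.
IF Customer.GET('10000') THEN
  MESSAGE(Customer.Name);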

Next statement
The next statement holds a few possible performance issues… It can cause a new select query to SQL Server in a few cases:
– filter changed
– sorting changed
– key value changed
– updating without indicating you will update

It is always good to remember not to change your request while looping. Set your filters, then loop over the records without changing them. An example like the one below would be a good candidate to turn into a Query object:

CustLedgEntry.SETCURRENTKEY("Customer No.");
IF CustLedgEntry.FIND('-') THEN
  REPEAT
    MESSAGE(CustLedgEntry."Customer No.");
    CustLedgEntry.SETRANGE("Customer No.",CustLedgEntry."Customer No.");
    CustLedgEntry.FIND('+');
    CustLedgEntry.SETRANGE("Customer No.");
  UNTIL CustLedgEntry.NEXT = 0;
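
For comparison, a minimal sketch of the looping pattern the text recommends (filters and sorting set once, nothing changed inside the loop); the per-customer skipping in the example above is exactly the kind of grouping a Query object handles better:

CustLedgEntry.SETCURRENTKEY("Customer No.");
IF CustLedgEntry.FINDSET THEN
  REPEAT
    // Only read the record here; do not change filters, keys or sorting inside the loop.
    MESSAGE(CustLedgEntry."Customer No.");
  UNTIL CustLedgEntry.NEXT = 0;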

Sums, Count and FlowFields
CountApprox is discontinued. Count still works, but when you only want to check whether any records exist, use ISEMPTY instead!
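
A small sketch of the difference, with an assumed filter on Cust. Ledger Entry:

CustLedgEntry.SETRANGE("Customer No.",'10000');
// ISEMPTY only asks SQL Server whether at least one row matches,
// instead of counting all matching rows like COUNT would.
IF NOT CustLedgEntry.ISEMPTY THEN
  MESSAGE('This customer has ledger entries.');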

CalcSums cannot be improved, sadly; just make sure you have good SIFTs.
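
For reference, a CalcSums sketch; it assumes the standard G/L Entry key ("G/L Account No.","Posting Date") which has Amount as a SumIndexField:

GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
GLEntry.SETRANGE("G/L Account No.",'8110');  // example account number
// With a SIFT maintained on this key, CALCSUMS reads the stored totals
// instead of summing the individual rows.
GLEntry.CALCSUMS(Amount);
MESSAGE(FORMAT(GLEntry.Amount));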

CalcFields can be improved. Imagine this scenario: you loop through 50,000 customers, and in your loop you do a CalcFields to get each customer's balance. That means you make a whopping 50,001 requests to SQL Server: 1 to fetch all customers, plus one per customer for the balance. Microsoft created a new statement to improve this: SetAutoCalcFields. You call SetAutoCalcFields with the same fields as CalcFields, but you call it before issuing your FIND statement. With our customer loop, this results in 1 request to SQL Server instead of 50,001… Talk about an improvement!
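
A rough sketch of both variants, using the Customer table's "Balance (LCY)" FlowField:

// Old pattern: one extra request per customer to calculate the FlowField.
IF Customer.FINDSET THEN
  REPEAT
    Customer.CALCFIELDS("Balance (LCY)");
    MESSAGE(FORMAT(Customer."Balance (LCY)"));
  UNTIL Customer.NEXT = 0;

// New pattern: the FlowField is calculated as part of the FIND itself.
Customer.SETAUTOCALCFIELDS("Balance (LCY)");
IF Customer.FINDSET THEN
  REPEAT
    MESSAGE(FORMAT(Customer."Balance (LCY)"));
  UNTIL Customer.NEXT = 0;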

Find statements
There are different cases for how to use the find statements. Let’s start with some history: NAV used ISAM before, which did not really care whether you used FindFirst or Find('-'), since ISAM was basically like a tape; it rolls to the beginning of your set.

Now it is better to use FINDFIRST if you only need the first record. Don’t call NEXT afterwards, because that fires another request to SQL Server.
The same goes for FINDLAST.
Use Find('-') if you only expect a small set of records, because it will prefetch the first 50 records. Use FindSet if you really want all records.
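
A few sketches of the guidelines above (the table and filter are only examples):

// Only the first record is needed: FINDFIRST, and no NEXT afterwards.
Customer.SETRANGE("Country/Region Code",'US');
IF Customer.FINDFIRST THEN
  MESSAGE(Customer."No.");

// The whole filtered set is needed: FINDSET with a REPEAT..UNTIL loop.
IF Customer.FINDSET THEN
  REPEAT
    MESSAGE(Customer."No.");
  UNTIL Customer.NEXT = 0;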
