SQL Server Performance with Business Central

Several interesting articles have recently been published on how to optimize Business Central with SQL Server: how to make better use of the data cache, how to use partitioned tables, how to use compression, and how to monitor performance. They are worth summarizing here.

Optimizing SQL Server Performance with Business Central

https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/optimize-sql-server-performance

 

#1 – Business Central Server data caching

In Business Central, the data cache is shared by all users who are connected to the same Business Central Server instance. This means that after one user has read a record, a second user who reads the same record gets it from the cache. In earlier versions of Business Central, the data cache was isolated for each user.

There are two types of caches, global and private:

  • Global cache is for all users connected to a Business Central Server instance.
  • Private cache is per user, per company, in a transactional scope. Data in a private cache for a given table and company is flushed when a transaction ends.

The cache that is used is determined by the lock state of a table. If a table is not locked, then the global cache is queried for data; otherwise, the private cache is queried.

https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/optimize-sql-data-access

 

#2 – Using SQL Server table partitioning

As of Microsoft Dynamics NAV 2018, the use of SQL Server table and index partitioning is a supported configuration. The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a SQL Server database. All partitions of a single index or table must reside in the same database. The table or index is treated as a single logical entity when queries or updates are performed on the data. Prior to SQL Server 2016 SP1, partitioned tables and indexes were not available in every edition of SQL Server. Partitioning large tables or indexes can have the following manageability and performance benefits:

  • You can perform maintenance operations on one or more partitions more quickly. The operations are more efficient because they target only these data subsets, instead of the whole table. For example, you can choose to rebuild one or more partitions of an index.
  • You might be able to improve query performance, based on the types of queries you frequently run and on your hardware configuration. When SQL Server performs data sorting for I/O operations, it sorts the data first by partition. SQL Server accesses one drive at a time, and this might reduce performance. To improve data sorting performance, stripe the data files of your partitions across more than one disk by setting up a RAID (redundant array of independent disks). In this way, although SQL Server still sorts data by partition, it can access all the drives of each partition at the same time.
  • You can use partitioning to distribute parts of tables to different I/O subsystems. For example, you could archive data for old transactions on slow and inexpensive disks and keep current data on solid-state drives (SSDs). You can also improve performance by enabling lock escalation at the partition level instead of the whole table, which can reduce lock contention on the table.
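As a sketch of the lock-escalation point above, partition-level escalation is enabled with a single ALTER TABLE statement (the table name is illustrative; any partitioned table works):

```sql
-- AUTO lets SQL Server escalate locks to the partition (HoBT) level
-- on a partitioned table, instead of escalating to the whole table.
ALTER TABLE [dbo].[G_L Entry] SET (LOCK_ESCALATION = AUTO);
```

On a non-partitioned table, AUTO behaves the same as the default TABLE setting, so this change is safe to apply before and after partitioning.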

How Business Central supports partitioning

If you have altered tables in a Business Central database to make them partitioned tables, the synchronization engine, which is responsible for mapping the logical metamodel to physical tables, will respect this configuration during upgrades. After a schema upgrade, even if tables have been dropped and recreated, the partitioning strategy applied to the original tables will be added to the upgraded tables. You can create a partitioned table or index in SQL Server by using SQL Server Management Studio or Transact-SQL.

Table Partitioning Example

This example uses Transact-SQL to change table G_L Entry to be partitioned on the Posting Date field, with data partitioned on the year, and where all partitions are aligned to the PRIMARY file group.

  1. In the SQL query editor, create a partition function that divides data by year (this function can be reused for partitioning multiple tables):

CREATE PARTITION FUNCTION [DataHistoryPartitionmethod] (datetime)
AS RANGE LEFT FOR VALUES (
'20151231 23:59:59.997',
'20161231 23:59:59.997',
'20171231 23:59:59.997',
'20181231 23:59:59.997' )
GO

  2. Create a partition scheme that maps partitions to file groups. In this example, all partitions are mapped to the PRIMARY file group (this scheme can also be reused for partitioning multiple tables):

CREATE PARTITION SCHEME DataHistoryPartitionScheme
AS PARTITION DataHistoryPartitionmethod ALL TO ([PRIMARY])
GO

  3. In the Dynamics NAV Development Environment, add the Posting Date field to the primary key.
  4. In the Transact-SQL editor, partition table G_L Entry by using the previously defined partition scheme:

ALTER TABLE [dbo].[G_L Entry]
DROP CONSTRAINT [G_L Entry$0]
GO

ALTER TABLE [dbo].[G_L Entry]
ADD CONSTRAINT [G_L Entry$0] PRIMARY KEY CLUSTERED
(
[$companyId], [Entry No_], [Posting Date]
)
ON DataHistoryPartitionScheme( [Posting Date] )
GO
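After partitioning, it is worth verifying how rows are actually distributed. The following sketch uses the standard catalog view sys.partitions and the $PARTITION function together with the partition function defined above:

```sql
-- Row count per partition of the G_L Entry table
-- (index_id 0 = heap, 1 = clustered index)
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('[dbo].[G_L Entry]')
  AND p.index_id IN (0, 1)
ORDER BY p.partition_number;

-- Which partition a given posting date falls into
SELECT $PARTITION.DataHistoryPartitionmethod('20170315') AS PartitionNumber;
```

With the RANGE LEFT boundaries above, a posting date in 2017 should land in the partition holding the 2017 range.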

 

#3 – Using SQL Server data compression

As of Business Central April 2019, it is possible to configure data compression directly in table metadata by using the CompressionType property in AL or CSIDE. Previously, compression could only be configured in SQL Server. You use data compression to help reduce the size of selected tables in the database. In addition to saving space, data compression can help improve performance of I/O-intensive workloads because the data is stored in fewer pages and queries will read fewer pages from disk. This is especially useful if your storage system is based on disks and not SSD.

However, extra CPU resources are required on the database server to compress and decompress the data while data is exchanged with the Business Central Server.

With the CompressionType property, you can configure row or page compression, or configure the table not to use compression. With these compression settings, the Business Central table synchronization process will make changes to the SQL Server table, overwriting the current compression type, if any. You can instead choose to control data compression directly in SQL Server by setting the CompressionType property to Unspecified, in which case the table synchronization process will not manage data compression.

To evaluate whether a table is a good candidate to compress, you can use the stored procedure sp_estimate_data_compression_savings in SQL Server.
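As an example, the stored procedure can be called like this for the G_L Entry table (the table name is illustrative; the procedure and its parameters are standard SQL Server):

```sql
-- Estimate the space savings of PAGE compression for a table.
-- NULL for index/partition means "evaluate all of them".
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'G_L Entry',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

The result set compares the current size with the estimated compressed size per index and partition, which makes it easy to spot tables where compression pays off.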

Because SQL Server supports data compression on the partition level, you can combine SQL Server data compression with table partitioning (see the previous section) to achieve flexible data archiving on historical parts of a large table, without having the CPU overhead on the active part of the table.
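A minimal sketch of this combination, assuming the partitioned G_L Entry table from the previous section where partition 1 holds the oldest data:

```sql
-- Apply PAGE compression only to the historical partition,
-- leaving the active partitions uncompressed to avoid CPU overhead there.
ALTER TABLE [dbo].[G_L Entry]
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = PAGE);
```

The same statement can be repeated for each partition that has aged out of active use, for example from a yearly maintenance job.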

Note:

Prior to SQL Server 2016 SP1, compression was not available in every edition of SQL Server.

SQL Data Compression Link

https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/optimize-sql-data-access#Compression

 

#4 – Troubleshooting: Using Query Store to Monitor Query Performance in Business Central

What is SQL Server Query Store?

The SQL Server Query Store feature provides you with insight on database query plan choice and performance. It simplifies database performance troubleshooting by helping you quickly find performance differences caused by query plan changes. Query Store automatically captures a history of queries, plans, and runtime statistics, and retains these for your review.

Where is SQL Server Query Store available?

SQL Server Query Store is available in SQL Server (starting with SQL Server 2016) and in Azure SQL Database.
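Query Store is off by default in SQL Server (it is on by default in Azure SQL Database), so the first step is to enable it on the Business Central database. A sketch, with an illustrative database name:

```sql
-- Enable Query Store and let it capture and persist runtime statistics
ALTER DATABASE [BusinessCentral]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
```

Once enabled, SQL Server starts collecting queries, plans, and runtime statistics into the sys.query_store_* catalog views.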

What are the common scenarios for using the Query Store feature?

Query Store keeps a history of compilation and runtime metrics throughout query execution. With this information, you can get answers to questions about your workload, such as:

  • What were the last n queries executed on the database?
  • How many times was each query executed?
  • Which queries had the longest average execution time within the last hour?
  • Which queries had the highest average physical I/O reads in the last 24 hours, with the corresponding average row count and execution count?
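Questions like these can be answered directly from the Query Store catalog views. The following sketch finds the queries with the longest average duration over the last hour:

```sql
-- Top 10 queries by average duration in the last hour
SELECT TOP 10
    qt.query_sql_text,
    rs.avg_duration,          -- microseconds
    rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
WHERE rsi.start_time >= DATEADD(HOUR, -1, GETUTCDATE())
ORDER BY rs.avg_duration DESC;
```

Swapping avg_duration for avg_physical_io_reads and widening the time window answers the I/O question in the same way.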

https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/troubleshoot-query-performance-using-query-store


This was originally posted here.
