Optimizing X++ compiler performance on your box.

Introduction

So you have been following this blog and learned a few things about the X++ language, perhaps picking up some dos and don'ts along the way. All good, if a little theoretical, perhaps. Today we will get down to the nitty-gritty of something that makes a real difference: how to achieve the best performance from the X++ compiler. I will explain how the X++ compiler is different from other compilers, and why. Then I will give a few tips that can make a huge difference in compilation time.

Why does it take so long?

The X++ language does not have the concept of namespaces. Instead source code is collected in models, and several of these can be organized in packages (also known as modules). A package is the unit of deployment and comes in the form of an assembly and some netmodules (see below). Of course, the compiler generates tons of other artifacts as well, but that is for another time.

Imagine that you are making a change in a huge model like the Application Suite model, which contains millions of lines of code out of the box. If all you want is to fix a typo in a comment, you would not want to compile all that code again; it would take a very long time, killing productivity. To avoid that, the compiler uses the concept of .NET netmodules to contain subsets of the MSIL code, instead of storing everything in one behemoth assembly. Netmodules are a part of the .NET framework that is not very well known, simply because there is very little use for them elsewhere.

If MyClass is changed, we compile that class into a netmodule. But we do not want to generate netmodules with only one class, so we also compile a set of other classes even if they have not changed; in the current implementation we take around 50 classes with similar names and compile them into the netmodule. The exact details of how many artifacts go into a netmodule, and how they are chosen, are our secret sauce and subject to change: You may not make any assumptions about how this works.

It is a balancing act. We do not want too few netmodules, because then each netmodule contains many files, and changing even a few artifacts forces a large and slow recompilation. On the other hand, we do not want too many either, because then it would take forever to spin up the code: The system would need to load every netmodule (and perhaps its .pdb file as well).

The conclusion is: Even if you change only a few files you may end up compiling many files, generating several netmodules in the process.
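If you are curious, you can see this structure for yourself: a package's manifest assembly lists the netmodules it is composed of, and you can enumerate them from PowerShell. A minimal sketch, assuming a typical one-box folder layout; the path and assembly name below are just examples, so substitute a package from your own machine:

    # Load a package's manifest assembly and list the netmodules it is
    # composed of. The path is illustrative; point it at a package bin
    # folder on your own box.
    $assemblyPath = 'K:\AosService\PackagesLocalDirectory\ApplicationSuite\bin\Dynamics.AX.ApplicationSuite.dll'
    $assembly = [System.Reflection.Assembly]::LoadFrom($assemblyPath)

    # GetModules() returns the manifest module plus every netmodule.
    $assembly.GetModules() | ForEach-Object { $_.Name }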

During compilation, the system needs to consume metadata that describes the artifacts used in your code. It takes time to load all this information and transform it from XML into a binary format. To summarize, the compiler is doing a lot of work. Fortunately, it is built to scale and to handle these tasks efficiently. Let's see how we can get the best experience.

By the way, avoid compiling models that have not changed, or that have no dependency on the packages or modules you are actively working on, especially those delivered out of the box by Microsoft. Once the Microsoft X++ packages are deployed or installed, there is no need to recompile them on your own box.

Now that we have that out of the way, let's see how we can make the best of this architecture.

Hardware considerations

The compiler needs all the CPU cycles it can get. It is as simple as that: You need a powerful machine to get good results. I am using a desktop machine with a single Intel(R) Xeon(R) W-2235 processor @ 3.80GHz. I typically run one or more virtual machines on that box, splitting its 12 logical processors among them. It works well. Do not use the Pentium 365 laptop from your grandmother's estate. This is sort of obvious, I know.

Get lots of RAM

As I hinted above, the compiler needs to load lots of metadata to resolve types from artifacts that are not in your own module. This (and the fact that it needs to keep a lot of abstract syntax trees in memory during compilation) means that it has a large memory footprint. The official requirement is 16GB of RAM, but you may be better off with more than that, depending on your scenario. I have 64GB of physical RAM that I can distribute among my virtual machines.
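If you are not sure what a given (virtual) machine actually has, here is a quick way to check core count and physical memory from PowerShell, using standard CIM queries:

    # Cores and logical processors available to this machine.
    Get-CimInstance Win32_Processor |
        Select-Object Name, NumberOfCores, NumberOfLogicalProcessors

    # Physical memory, in GB.
    [Math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)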

Use SSDs

Data access performance is also important: Some phases of the compilation are I/O bound. You should use solid state drives (SSDs), not the slow old-fashioned drives with spinning magnetic disks. Note that if you use virtual machines hosted in the cloud, the disks are typically an abstraction, not necessarily backed by an actual physical device; you can get (virtual) disks of different quality depending on how much you pay. Get the best you can afford. The amount of disk space you need depends on how many models you want to keep on your machine, of course, but the need for speed is universal.
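If you want a rough idea of whether the disk backing your machine actually performs like an SSD, the built-in Windows System Assessment Tool gives a quick benchmark. Run it from an elevated prompt; the drive letter is just an example, so use the drive that holds your packages folder:

    # Quick disk benchmark with the built-in Windows System Assessment Tool.
    # Run elevated; replace C with the drive holding your packages folder.
    winsat disk -drive C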

At this point we have set up a nice box and installed all the bits. Now let's consider what we can do with software and configuration to increase performance.

Software / configuration considerations

In this section we describe tweaks you can make to your software setup.

Turn off cross reference.

Around a third of the compilation time is spent updating the cross-reference information in the SQL database. The cross reference is useful: It is what powers the Go to definition experience, among several other things. You can turn it off when doing incremental builds in the property page for the project:

[Screenshot: the project property page, showing the Build cross reference data option]

Or just turn it off in general for compiling packages:

[Screenshot: the build options dialog, showing the cross reference setting for package builds]

When you have made many changes and want the "find all references" functionality etc. to work accurately again, just enable it again and recompile. If you are using the xppc.exe command-line compiler in your build scripts, include or omit the -xref command-line switch accordingly.
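For reference, a build script invocation might look roughly like the sketch below. Only -xref is the switch discussed here; the other switches, the package name, and all paths are illustrative and may differ with your toolchain version, so verify against xppc.exe /? on your own machine:

    # Full build of one package, with cross-reference generation (slower).
    # Everything except -xref is illustrative; check xppc.exe /? locally.
    & 'K:\AosService\PackagesLocalDirectory\Bin\xppc.exe' `
        '-metadata=K:\AosService\PackagesLocalDirectory' `
        '-modelmodule=MyPackage' `
        '-output=K:\AosService\PackagesLocalDirectory\MyPackage\bin' `
        '-xref'

    # For the fast inner loop, run the same command with '-xref' omitted.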

Turn off the best practices checks.

Turning off the best practice checks is not a great idea, since they do not take a long time to run. They are aggregations of many years of experience, and they are safety nets that can save you many hours down the line. However, if you must, you can turn them off by using the settings in the options menu:

[Screenshot: the Options menu, showing the best practice check settings]

Remember to turn them back on when you need them.

Cap your SQL usage.

Speaking of SQL: The SQL server serves data to whatever X++ you are running, and it stores the cross reference, as mentioned before. SQL Server will race to consume as much memory as it can to cache results, to provide the best possible performance to its clients. Because of this greedy behavior it may starve other processes (like the X++ compiler, for instance) of memory. When it does, the processes fight for resources, forcing the operating system to do expensive page swapping. Make sure the amount of memory SQL Server is allowed to use is reasonable; I like to use a maximum of around 3GB. Here is how to set it:

  • Open SQL Server Management Studio.
  • In Object Explorer, right-click a server and select Properties.
  • Click the Memory node.
  • Under Server Memory Options, enter the amount that you want for Maximum server memory.
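If you prefer to script the same setting, sp_configure can do it as well. A sketch using the Invoke-Sqlcmd cmdlet from the SqlServer PowerShell module; the instance name ('.' for the default local instance) and the 3GB figure are examples:

    # Cap SQL Server at roughly 3GB so it cannot starve the compiler.
    # Requires the SqlServer PowerShell module.
    Invoke-Sqlcmd -ServerInstance '.' -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
    Invoke-Sqlcmd -ServerInstance '.' -Query "EXEC sp_configure 'max server memory (MB)', 3072; RECONFIGURE;"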

Kill the Batch process (and anything else that is not strictly needed).

If you are not using batch service scenarios during your X++ development session, you can turn off this service. During the F5 scenario (i.e., the Compile/Build and Debug session), Visual Studio shuts down the batch process before the compilation (because that process may have loaded assemblies that are being compiled into its AppDomain), then restarts it and waits for it to be fully operational. Those two operations (stop and start) take time, so if you are not building, debugging, and testing your code under a batch scenario, you will be better off turning this service off manually.
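A minimal sketch of doing that from an elevated PowerShell prompt. On a typical one-box development environment the batch service is named DynamicsAxBatch, but that is an assumption about your setup, so verify the name with Get-Service first:

    # Stop the batch service for the duration of the development session.
    # 'DynamicsAxBatch' is the usual name on a one-box environment; verify
    # with Get-Service on your machine.
    Stop-Service -Name 'DynamicsAxBatch'

    # Optionally keep it from starting automatically while you iterate.
    Set-Service -Name 'DynamicsAxBatch' -StartupType Manual

    # When you need batch scenarios again:
    Start-Service -Name 'DynamicsAxBatch'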

Antimalware executable.

I hope you made it this far into this blog post: This is the most surprising option, and it is probably the one with the largest impact. The Windows OS uses a process called msmpeng to scan your system for malware. This is a Good Thing, obviously, but sometimes the setup is very intrusive: In certain setups the system will analyze whatever app you are starting and not allow it to run until the malware analysis is done. This bogs down performance.

There are ways to circumvent this issue, but you should make sure that doing so does not expose a security risk and is consistent with your company rules. Please do not attempt this without proper guidance; your company may lock down some of these possibilities. If you believe it is appropriate, you can exclude the bin folder containing xppc.exe, xppbp.exe, and all their supporting files from malware analysis, which will cause the compiler and its associated tools to start much more quickly. Make sure that you never share this folder, to limit exposure.

The way to exclude the folder depends on the operating system you are running. One way is to press the Windows key and type "settings". Select Update & Security, then Windows Security, then Virus & threat protection. Under Virus & threat protection settings, select Manage settings and then Exclusions, and add the bin directory or the AOSService directory to the list of excluded folders.
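On machines that use Windows Defender, the same exclusion can be scripted from an elevated PowerShell prompt with the built-in Add-MpPreference cmdlet. The path below is illustrative, and the warning that follows applies to the scripted route just as much as to the settings UI:

    # Exclude the compiler bin folder from real-time malware scanning.
    # Run elevated; the path is illustrative, so point it at the folder
    # that holds xppc.exe on your own box.
    Add-MpPreference -ExclusionPath 'K:\AosService\PackagesLocalDirectory\Bin'

    # Confirm the exclusion took effect.
    (Get-MpPreference).ExclusionPath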

Please NOTE: You should NOT put these exceptions in place on production systems, or on systems that produce the packages that are released to production (i.e., official build or pipeline systems). This also applies to systems used to publish these assemblies so that they may be consumed by production systems (i.e., release pipeline systems, or the systems they are published from via Visual Studio, VS Code, PowerShell, etc.).

Comments

*This post is locked for comments

  • Peter Villadsen
    The problem SvenJ mentions has now been fixed. Thanks for reporting it.
  • Sven Joos
    Hi Peter, thanks for this post. But did you know that switching off the checkbox "build cross reference data" actually has no effect? If you check the command that is running on your box, you will see that although the checkbox is "off", the command will still hold parameters for building the cross references. (See my blog article: svenjoos.wixsite.com/.../decrease-compile-time-by-switching-of-xref-generation)
  • Peter Villadsen (in reply)
    Sven, I have added a bug and will investigate soon! Thank you for bringing it to our attention.