AX 2012 (R2) Compilation Performance and Best Practice

Question Status

Verified
Tommy Skaue asked a question on 1 Mar 2013 7:05 AM

I would like to hear others share their experiences and best-practice recommendations for getting AX 2012 compilation time down to a bare minimum.

We know an X++ compilation involves writing to disk on the AOS server and writing transactions to the model database.

We used to have a Best Practice Whitepaper and Checklist for Dynamics AX 2009, and many of its points are still valid for the next generation of Dynamics AX. I would still like to hear from the community what hints and tips they would like to share regarding this specific process: compiling AX 2012.

A full compile of AX 2012 may take 2-3 hours; AX 2012 R2 can take over 5 hours. Can anything be done to reduce the compilation time without tripling the hardware?

Reply
Dominic Lee responded on 7 Nov 2013 8:32 PM

Hi all,

I just noticed there is a new compile tool called AxBuild.exe in R2 CU7. Below are the results the Dev team got:

The wall clock time for AxBuild.exe was compared to the time for a legacy client-side full X++ compile. The results were as follows:

  • AxBuild.exe:    10.7 minutes.

  • MorphX client:    146.0 minutes.

  • 146.0  /  10.7  =  13.6

Exciting! I am in the process of getting a working CU7 installation (I'm having trouble upgrading from CU6 =( ). Once I have my test results I'll post them here.

I encourage others to do some tests and post their results here too. =)

P.S. For how it works and other details: http://msdn.microsoft.com/en-us/library/dn528954.aspx
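For reference, a typical AxBuild invocation looks roughly like this (the instance number, paths, and worker count below are examples, not defaults; see the MSDN page above for the exact parameters):

```shell
REM Run from the AOS server's bin folder (path is an example).
cd "C:\Program Files\Microsoft Dynamics AX\60\Server\MicrosoftDynamicsAX\bin"

REM Compile the entire model store against AOS instance 01, pointing
REM /altbin at the 32-bit client bin folder (which holds the X++
REM compiler), with 4 parallel worker processes.
axbuild.exe xppcompileall /s=01 /altbin="C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin" /workers=4
```

AxBuild can pick a worker count by itself based on the machine's cores, so /workers is optional tuning.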

Reply
Stefan Lundquist responded on 7 Nov 2013 11:22 PM

Has anyone tried installing just the CU7 kernel on a previous version of AX and tested AxBuild?

Reply
Suggested Answer
Joris de Gruyter responded on 8 Nov 2013 5:58 AM

Yes, if you read the documentation on MSDN ( msdn.microsoft.com/.../d6da631b-6a9d-42c0-9ffe-26c5bfb488e3.aspx ), it talks about issues with COM objects. In fact, prior to CU7 there is a form called SysInetHtmlEditor that has this issue. You need to fix that to get rid of two compile errors, but otherwise it works great.

I blogged about this here: daxmusings.codecrib.com/.../what-you-need-to-know-about-15-minute.html

Reply
Suggested Answer
Daniel Weichsel responded on 8 Nov 2013 6:59 AM

On top of the COM issues mentioned by Joris, you can get locked into the newer kernel version by connecting with a CU7 AOS.  I received this failure on a CU6 environment after having run an AxBuild compile from a CU7 AOS:

Object Server 01:  Fatal SQL condition during login. Error message: "The internal time zone version number stored in the database is higher than the version supported by the kernel (5/4). Use a newer Microsoft Dynamics AX kernel."

I pushed past this by setting the timezone version back (in the SqlSystemVariables table), but ONLY because it was on a test/development environment.  I would not do that for an important system.
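For anyone in the same dev/test-only situation, the workaround Daniel describes amounts to a one-line update. The database name and the exact PARM value here are assumptions from memory; check your own SqlSystemVariables table first, and never do this on a production system:

```shell
REM DEV/TEST ONLY. Resets the stored time zone version so a CU6
REM kernel will accept the database again (the "5/4" in the error).
REM 'SYSTIMEZONESVERSION' and the database name are assumptions.
sqlcmd -S . -d MicrosoftDynamicsAX -Q "UPDATE SQLSYSTEMVARIABLES SET VALUE = 4 WHERE PARM = 'SYSTIMEZONESVERSION'"
```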

Reply
Tommy Skaue responded on 9 Nov 2013 7:05 AM

I guess most installations will upgrade binary components to the latest build (CU7) now, even if they are running an earlier version of the application (R2 only). The new compilation time is stunning. Well served by Microsoft, indeed. :-)

Reply
Brandon Wiese responded on 9 Nov 2013 7:27 AM

I get conflicting answers on whether running binaries from one version with the application from another is supported. I remember reading once on a Sustained Engineering blog post that Microsoft will not create a binary-to-application dependency in a hotfix or hotfix rollup, so in theory any post-R2 binaries should be compatible with any post-R2 application, and likewise 2012 RTM binaries with a 2012 RTM application. Unfortunately, I have personally witnessed Microsoft break this rule.

Has anyone actually DONE this combination of R2 application and R2 CU7 binaries in either a test or production environment?

What are these continued compilation performance increases of which you speak? Is it even better than CU6? If I install CU7 binaries on an R2 application, update the model store schema, and run a database sync, is that all it takes to realize these performance benefits?

Reply
Suggested Answer
Tommy Skaue responded on 9 Nov 2013 7:34 AM

I have tested compiling a CU6 model store just by pointing axbuild at it, but the tool ignores this and instead compiles the model store for the AOS configuration set up on the system.

Daniel has tested compiling a CU6 model store by changing the AOS configuration, but the model store could not then be used with a CU6 kernel without a SQL hack.

I have even tried the crazy thing of running axbuild from a CU6 AOS bin folder, but they've added additional code to the CU7 binaries to support this new build routine (obviously). ;-)

I am inclined to conclude that the best solution is updating all binary components to CU7, all except "database". You will need to go through a proper code upgrade process first, making sure your code and any ISV or VAR code is successfully upgraded to CU7.

Reply
Brandon Wiese responded on 11 Nov 2013 6:26 AM

A 9.5-minute compile on a vanilla CU7 system. Jaw-dropping, and very satisfying indeed to see a multi-core system pegged at 100%, I must say. Finally. Finally. Maybe I can start getting some sleep again.

EDIT: SQL Server 2012 SP1 CU6, 8 processors, Hyper-V guest on Windows Server 2012.  Client/AOS/SQL all on one OS.  Completely vanilla installation.

Reply
E.K. responded on 31 Oct 2014 7:51 AM

Virtualizing shaves about 20 percent off SQL performance, and the network adds latency.

Consider running one AOS instance on the physical SQL box with enough memory.

Reply
Verified Answer
Joris de Gruyter responded on 1 Mar 2013 1:57 PM

It's a big issue indeed. As shown, there are some ways to shave off time, but in the grand scheme of things it's not a whole lot.

The problem is really the X++ compiler; it simply doesn't scale. Notice how it uses just one core, and not even at 100%. Maybe with super-fast SQL and disk it may reach 100% - of one core only. Remember, they originally started developing AX in the early 80s, I believe - 8086-type stuff with the memory restrictions involved - and the first release of AX came out when the first Pentiums were brand new. The architecture of the compiler hasn't really changed since then.

Bottom line: it doesn't scale, so it doesn't matter a whole lot how much hardware you throw at it. The problem is the legacy history of the X++ compiler versus the amount of new technology and code they've added to 2012 and R2.

Reply
Verified Answer
Peter Villadsen responded on 4 Mar 2013 4:31 PM

The current X++ compiler does not scale very well with an increased load of source code.

This is basically due to its old architecture, and to the fact that it interfaces with a metadata provider that does not perform very well either. You will not get a lot of benefit from increasing the speed of your CPU, since the CPU is not maxed out anyway: the compiler spends much of its time waiting for metadata to be fetched from the server.

Even though the hardware the client runs on may be 64-bit, the client application (where the compilation happens) is still 32-bit. This places restrictions on how much memory the compiler can use, as well as on its speed.

The best thing you can do is make sure the metadata does not have to travel too far: in other words, try running the client on the server box (if possible; there may be many reasons you are not allowed to do this).

Reply
Verified Answer
Mark Veurink responded on 14 Mar 2013 5:47 AM

We tested several different scenarios, and we found that the following is best practice:

- Install AOS and SQL on the same box (splitting them over two boxes will double the compile time)

- Use the latest CPU architecture with 2x vCPU (1x vCPU will double the time, as will an older CPU)

We timed a full compile of AX 2012 R2 on a VMware system with 2x vCPU Intel Xeon E5-2690 @ 2.9 GHz at 2 hours and 5 minutes. The same compile on the same virtual machine on an older Xeon 5650 took up to 4 hours and 35 minutes. So using the latest CPU architecture speeds things up considerably.

Although... I personally find it really strange that Microsoft is not putting more effort into this problem. Not all customers run "the latest and greatest" hardware, and getting a hotfix into the production environment (we of course hope we never have to, but...) is becoming a more and more daunting task.

Tip!

If the customer does not want to buy more recent hardware, have them buy a recent Core i7 workstation and use it (with AOS and SQL installed on it) only for full compiles and full CIL generation.

Reply
Verified Answer
Joris de Gruyter responded on 25 Apr 2013 8:35 AM

Alright, I just posted it. Read all about it here: daxmusings.codecrib.com/.../dynamics-ax-2012-compile-times.html

40 minutes. That's right :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 9:18 AM

Inheritance is definitely a big deal, but not the only thing.

For inheritance, try inheriting from RunBase. Compile. Then change the class to inherit from RunBaseBatch instead. The RunBaseBatch methods will not show up on your class (in IntelliSense when you try to use the class from code). The only way to get the class fixed is to go to RunBaseBatch and compile forward on that base class. The compiler won't tell you this, though.

As for single-object compiles... what if you change the name of a table field, for example? That table compiles fine, and your X++ classes also think they're fine (they may actually run fine if the object IDs didn't change). But as soon as you try to generate CIL, or move the code to another environment where new IDs are assigned, classes referring to the old field name no longer compile.

CIL generation has complicated matters since it is more strict (which is a good thing) than what we had before. CIL technically does a full generation every time (don't trust the incremental! there are easy ways to show the incremental is flawed).

These are just a few easy examples (I'm sure there are even more interesting ones involving changing properties on tables/fields/services/etc.). They show the need for full compiles. Whether you can do a full compile in bits and pieces (back on topic) is a good question, but I wouldn't trust it entirely :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 11:26 AM

If you're implementing a full algorithm for dependencies etc., you should look at some of the startup classes that deal with XPO imports. There's some logic there about recompiling. I think it's purely based on the compiler output, deciding to keep recompiling certain pieces until there are no more errors or until some threshold is reached.

Reply
Suggested Answer
Maris.Orbidans responded on 18 Mar 2013 6:16 AM

Sorry, I could not resist :) I'm probably repeating some previous comments:

Watching an AX compilation (like bird watching) reveals that the compile is performed sequentially - a little SQL, some more AOS, and a little AX again, in a loop.

That begs for a test:

1) Take the best CPU for single-threaded operation. Here is some info about it:

www.cpubenchmark.net/singleThread.html

I would take the inexpensive Core i5-3570S from that list.

2) Take very fast RAM, ~16-32 GB (I think 32 GB is the max for a Core i5 CPU), and a board which allows this RAM to run at speed (overclocking sites can help).

3) Do not use virtualisation - it undeniably causes extra overhead.

4) Run all AX components on this machine (all the standard AX components required for compilation run fine from about 8 GB of RAM).

Enjoy.

Ah, and it would be nice to overclock this thing to 4+ GHz.

Ah, and the CIL build (after compilation) is a different kind of animal - it uses parallel threads and can employ many cores.

Reply
Suggested Answer
Joris de Gruyter responded on 25 Apr 2013 7:49 AM

Six days, that is crazy. You need to review your setup. I'm about to post a blog entry about this. I got the compile down to 40 minutes =)

Reply
Suggested Answer
Joris de Gruyter responded on 30 Apr 2013 12:09 PM

Hotfix for R2 compiler speed released: blogs.msdn.com/.../ax2012-r2-hotfix-available-improves-compile-speed.aspx

Reply
Suggested Answer
Christian Moen responded on 6 May 2013 3:03 AM

Hi,

I just tested the released compile optimization (support.microsoft.com/.../KBHotfix.aspx), and it was not an amazing improvement.

On my test server - VMware with 2 vCPUs (Intel X5670 @ 2.93 GHz) and 8 GB RAM, running the AOS and client (database on a dedicated SQL Server) - a full compile was reduced from approximately 5 hours down to 3 hours, 40 minutes.

An improvement, but still not satisfying.

Reply
Suggested Answer
Joris de Gruyter responded on 15 May 2013 9:44 AM

Thanks Dan, I was going to come over to your desk and ask for this :-)

Reply
Suggested Answer
Christian Moen responded on 15 May 2013 11:53 AM

Hi all,

An update on my previous post. Still running the following server:

VMware with 2 vCPUs (Intel X5670 @ 2.93 GHz), 8 GB RAM

However, this time the AOS, client, and SQL Server 2012 are installed locally (TCP/IP disabled in the SQL configuration, using just Shared Memory).

Compilation with binary 6.2.1000.413 (not containing the latest compile optimizations) now took 2 hours, 50 minutes.

Based on this, it seems the kernel enhancements are primarily related to the TCP communication between the AOS and database.

Adding 2 more vCPUs, dedicating 2 CPUs to the AOS and 2 to SQL Server, and adding some more memory will probably push the compilation time down further - at least we are moving in the right direction :)
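As a sanity check that connections really are coming in over Shared Memory rather than TCP, a DMV query run on the box can tell you (the program-name filter is an assumption; adjust it to however your AOS identifies itself):

```shell
REM net_transport should read "Shared memory" for local AOS sessions.
sqlcmd -S . -Q "SELECT s.program_name, c.net_transport FROM sys.dm_exec_connections c JOIN sys.dm_exec_sessions s ON s.session_id = c.session_id WHERE s.program_name LIKE '%Dynamics%'"
```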

Reply
Suggested Answer
Volker Deuss responded on 16 May 2013 2:17 AM

I don't think this was mentioned in here before:

The thing which had the biggest (positive) impact on compile times in our installations was changing the setting in Control Panel / Hardware and Sound / Power Options to "High performance" instead of "Balanced", which seems to be the default even on Windows Server... Choosing this option sounds kind of obvious, but it is definitely something you want to check.
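For what it's worth, the same switch can be scripted from an elevated command prompt instead of clicking through Control Panel (the GUID below is the well-known built-in High performance scheme; `powercfg /list` shows what is actually on your box):

```shell
REM List the available power schemes; the active one is marked with *.
powercfg /list

REM Activate the built-in "High performance" scheme.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```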

Reply
Suggested Answer
Joris de Gruyter responded on 21 May 2013 7:43 AM

Hi Tommy, if the KB only gave you a few extra minutes, it's more likely due to SQL caching than anything else :-)

Reply