Question Status

Verified
Tommy Skaue asked a question on 1 Mar 2013 7:05 AM

I would like to hear others share their experience with possible best practice and recommendations around getting AX 2012 compilation time down to a bare minimum.

We know an X++ compilation involves writing to disk on the AOS server and writing transactions to the model database.

We used to have a Best Practice Whitepaper and Checklist for Dynamics AX 2009, and many of those points are still valid for the next generation of Dynamics AX. Still, I would like to hear what hints and tips the community can share regarding this specific process: compiling AX 2012.

AX 2012 may take 2-3 hours; AX 2012 R2 over 5 hours. Can anything be done to reduce the compilation time without tripling the hardware?

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Reuben Gathright responded on 1 Mar 2013 9:56 AM

We have been using an ASUS KGPE-D16 motherboard with dual AMD Opteron 6128 processors, 64 GB DDR3 and 6 Intel 520 Series SSD drives, all connected as various logical drives.  SQL Server 2008 and AX 2012 R2 are both installed on the same box running Windows Server 2008 x64... compilation times still take over 4 hours to complete.

The same box can compile an AX 4.0 code file filled with customizations in less than 30 minutes.

The problem we are seeing is that the AOS is still single-threaded, even though it is now a 64-bit application.

Reply
Tommy Skaue responded on 1 Mar 2013 11:23 AM

Thanks for sharing, Reuben! Your setup looks pretty good. I don't have 6 SSD disks, but the rest looks similar.

Is this AX2012 or AX2012 R2 compiling?

Since you are running this on the same box and have plenty of RAM, I would assume any bottleneck will be either CPU or disk. Are you running RAID 5, 6 or 10? I've heard RAID 6 might be safer but RAID 5 is quicker. RAID 10 would be quicker still, but costs even more disk space.

Did you set Recovery Mode to Simple or Bulk in order to improve SQL Server performance?

What is your Max Degree of Parallelism (MAXDOP) while compiling? The recommended setting for AX is 1, but maybe it could be 0 while compiling? Perhaps someone knows whether compiling spawns multiple query batches, in which case setting MAXDOP to 0 would let SQL Server parallelize as much as possible.
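For reference, here is a sketch of how those two settings could be applied from the command line with sqlcmd. The database name is a placeholder for your own AX database, and MAXDOP should be set back to 1 after the compile, since that is the recommended value for AX at runtime:

```shell
# Sketch only: server (-S .) and database names are placeholders for your environment.
# Set the AX database recovery model to Simple for the duration of the compile:
sqlcmd -S . -Q "ALTER DATABASE [MicrosoftDynamicsAX] SET RECOVERY SIMPLE"
# Allow SQL Server to parallelize while compiling (MAXDOP = 0); remember to
# set it back to 1 afterwards, the recommended value for AX:
sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -S . -Q "EXEC sp_configure 'max degree of parallelism', 0; RECONFIGURE;"
```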

Have you looked at how threads are spread over the CPUs while compiling, and at how the I/O is performing? Since the AOS compiles on one thread, and this thread seemingly sticks to one processor, can we be sure the UI and SQL Server do their business on the other processors? :-)

It would be nice to find an optimal set of settings, so even if the hardware isn't state of the art, it will still compile as quick as possible.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Reuben Gathright responded on 1 Mar 2013 11:52 AM

Tommy, you seem just like me... eager to get answers about AX 2012 R2 compile times.  I used RAID 0 and put the code, database MDF and LDF files on their own SSD drives, which is why I had to use 6 SSD SATA drives.  Recovery Mode set to Simple is the option I chose.

No, the AX 2012 R2 compiler will not spread its CPU load across all 16 cores on these AMD Opterons.  It uses a meager 3% of available CPU power when compiling.

Now, my previous post was made in a rush.  I meant to conclude by saying that I am very eager to find a faster CPU for this server.  We are dealing with a single-threaded compiler (from my understanding so far), and that demands a high clock frequency processor with loads of Level 2 and Level 3 cache to improve compilation performance.

For example, I used to manage a database in Microsoft Access that stored casino reward points.  Every day, the system would synchronize all card swipes to give each player his new point total.  The system took hours to tabulate all these points on an eight-core 2.5 GHz Intel Xeon processor.  The solution?  I built a custom server using an Intel E8600 CPU, which offered 3.3 GHz of number-crunching power.  The result was a synchronization time of just 1 hour, because the processor ran at a higher clock speed and had loads of on-chip cache.

In conclusion, I am very tempted to just buy an Intel Core i7-3770K 3.5 GHz quad-core processor and motherboard to dedicate as my development server.  True, I will not be running Windows Server 2008 on an officially recommended box, but it will cost thousands less than a comparable Intel Xeon.

Anyone else out there have success with high clock rate cpus when compiling AX 2012 R2?

Reply
Tommy Skaue responded on 1 Mar 2013 12:14 PM

Hmm. That is interesting! So you want to focus on the actual CPU and its ability to crunch the operations fast enough. Well, I have a box here with 3.6 GHz and 4 cores (screencast.com/.../ltJTxuCM4Gn). I will try to see if it can chew better than the other box I have. :-)

What about SQL Server itself? We also know every element being compiled causes several inserts into the database. Are we sure the database is performing at its best? Could we improve performance by tuning indexes or statistics prior to the compilation?

I am planning to set up SQL Server 2012 (Enterprise), but I don't expect huge performance improvements over SQL 2008 R2 with regard to compilation time.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Reuben Gathright responded on 1 Mar 2013 12:26 PM

No; judging by SQL Server 2008 and other disk monitoring utilities, there is very little I/O demand while compiling.  SQL Server 2012 is mostly meaningless in the face of this fact.  I encourage you to check this out for yourself!

However, as with all AX database systems, they love low access times.  SSDs offer near-0 ms access times, while rotating 10K drives can at best manage around 5 ms.  If you get the chance to try out that 3.6 GHz box, why not use some SSDs at the same time and build a solid dev box?  FYI, we use Intel 520 Series SSDs in our HP ProLiant servers and have had no problems.

Reply
Tommy Skaue responded on 1 Mar 2013 12:54 PM

Well, thank you for sharing your insights and experience.

It would be interesting to see who could come up with a decent spec for a build server at an affordable price. ;-)

So basically your key points are getting a CPU that can crunch operations quickly and having your SQL Server on fast disks.

Does your RAID Controller support TRIM as well? How big are your disks?

I found this interesting post around RAID 0 with SSD:

www.anandtech.com/.../trim-raid0-ssd-arrays-work-with-intel-6series-motherboards-too

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Verified Answer
Joris de Gruyter responded on 1 Mar 2013 1:57 PM

It's a big issue indeed. As shown, there are some ways to shave off some time, but in the grand scheme of things it's not a whole lot.

The problem is really the X++ compiler; it simply doesn't scale. Notice how it uses just one core, and not even at 100%. Maybe with super-fast SQL and disk it may hit 100%, but of one core only. Remember they started developing AX originally in the early 80s, I believe, so 8086-type stuff with the memory restrictions involved, and the first release of AX came out when the first Pentiums were brand new. The architecture of the compiler hasn't really changed since then.

Bottom line: it doesn't scale, so it doesn't matter a whole lot how much hardware you throw at it. The problem is the legacy history of the X++ compiler versus the amount of new technology and code they've added to 2012 and R2.

Reply
Tommy Skaue responded on 1 Mar 2013 3:33 PM

Well, that might be, but has the AX 2012 code base really grown to 4-6 times the size of AX 2009's?

What else was added in AX 2012 that makes X++ compilation take 4-5 hours, compared to 30 minutes in the past?

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Stefan Lundquist responded on 4 Mar 2013 1:39 AM

Turning off Best Practice checks before compiling helps a bit.

Reply
Reuben Gathright responded on 4 Mar 2013 4:55 AM

Tommy,

So far I have discovered that AX 2012 has increased the complexity of inventory dimensions.  The added tables require many more lines of code to support the new relationships, while providing the appearance of seamless data integration to our user base.

IMO, someone at Microsoft should examine the X++ compiler at the assembly level and build in optimizations for at least the SSE and SSE2 instruction sets.  As you'll notice, the AOS is now 64-bit, which means the machines running this newer architecture very likely also support the modern instruction sets.

Reply
Tommy Skaue responded on 4 Mar 2013 7:00 AM

Interesting. Turning off unnecessary "chit-chat" (aka Best Practice checks) should hopefully shave off some minutes, unless you need it for some reason. I also noticed that disabling certain licenses generates extra synchronization time, but I haven't noticed any additional compilation time.

As for the compiler instructions being outdated, I would be surprised if they do anything about that in AX 2012. Perhaps if we cry loud enough, they will fix/improve it in AX 2015.

I guess the best thing we can hope for is to configure the environment as best we can given the current compiler engine.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Verified Answer
Peter Villadsen responded on 4 Mar 2013 4:31 PM

The current X++ compiler does not scale very well as the amount of source code increases.

This is basically due to its old architecture, and to the fact that it interfaces with a metadata provider that does not perform very well either. You will not get a lot of benefit from increasing the speed of your CPU, since the CPU is not maxed out anyway: the compiler spends much of its time waiting for metadata to be fetched from the server.

Even though the hardware the client is running on may be 64-bit, the client application (where the compilation happens) is still 32-bit. This places restrictions on how much memory the compiler can use, as well as on its speed.

The best thing you can do is make sure the metadata does not have to travel too far: in other words, try running the client on the server box (if possible; there may be many reasons you are not allowed to do this).

Reply
Verified Answer
Mark Veurink responded on 14 Mar 2013 5:47 AM

We tested several different scenarios, and we found the following to be best practice:

- Install AOS and SQL on the same box (splitting them over two boxes will double compile time)

- Use the latest CPU architecture with 2 vCPUs (1 vCPU will double the time; the same goes for an older CPU)

We timed a full compile of AX 2012 R2 on a VMware system with 2 vCPUs (Intel Xeon E5-2690 @ 2.9 GHz) at 2 hours and 5 minutes. The same compile on the same virtual machine on an older Xeon 5650 took up to 4 hours and 35 minutes. This means using the latest CPU architecture speeds things up considerably.

Although... I personally find it really strange that Microsoft is not putting more effort into this problem. Not all customers run "the latest and greatest" hardware, and deploying a hotfix to the production environment (we of course hope we never have to, but...) is becoming a more and more daunting task.

Tip!

If the customer does not want to buy more recent hardware, have them buy a recent Core i7 workstation. Use this workstation (with AOS and SQL installed on it) only for full compiles and full CIL generation.

Reply
Verified Answer
Joris de Gruyter responded on 25 Apr 2013 8:35 AM

Alright, I just posted it. Read all about it here: daxmusings.codecrib.com/.../dynamics-ax-2012-compile-times.html

40 minutes. That's right :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 9:18 AM

Inheritance is definitely a big deal, but not the only thing.

For inheritance, try to inherit from RunBase. Compile. Then change the inheritance to RunBaseBatch instead. The RunBaseBatch methods will not show up on your class (in IntelliSense when you try to use the class from code). The only way to get the class fixed is to go to RunBaseBatch and compile forward on that base class. The compiler won't tell you this, though.

As for single-object compiles... what if you change the name of a table field, for example? That table compiles fine, and your X++ classes also think they're fine (they may actually run fine if the object IDs didn't change). But as soon as you try to generate CIL, or move the code to another environment where new IDs are assigned, classes referring to the old field names no longer compile.

CIL generation has complicated matters since it's more strict (which is a good thing) than what we had before. CIL technically does a full generation every time (don't trust the incremental! there are easy ways to show the incremental is flawed).

These are just a few easy examples (I'm sure there are even more interesting ones involving changing properties on tables/fields/services/etc.). They show the need for full compiles. Whether you can do the full compile in bits and pieces (back on topic) is a good question. But I wouldn't trust it entirely :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 11:26 AM

If you're implementing a full algorithm for dependencies etc., you should look at some of the startup classes that deal with XPO imports. There's some logic there about recompiling. I think it's purely based on compiler output, and it decides to keep recompiling certain pieces until there are no more errors or until some threshold is reached.

Reply
Suggested Answer
Maris.Orbidans responded on 18 Mar 2013 6:16 AM

Sorry, I could not resist :) I probably repeat some previous comments:

Watching an AX compile (like bird watching) reveals that it is performed sequentially: a little bit of SQL, some more AOS, then a little bit of AX again, in a loop.

That begs for a test:

1) Take the best CPU for single-threaded operation. Here is some info about it:

www.cpubenchmark.net/singleThread.html

I would take the inexpensive Core i5-3570S from that list.

2) Take very fast RAM, ~16-32 GB (I think 32 GB is the max for a Core i5 CPU), and a board which allows this RAM to operate at full speed (overclockers' sites can help).

3) Do not use virtualisation; it undeniably causes extra overhead.

4) Run all AX components on this machine (all components of standard AX required for compilation feel rather OK starting from 8 GB RAM).

Enjoy.

Ah, and it would be nice to overclock this thing to 4+ GHz.

Ah, and the CIL build (after compilation) is a different kind of animal: it uses parallel threads and can employ many cores.

Reply
Suggested Answer
Joris de Gruyter responded on 25 Apr 2013 7:49 AM

Six days? That is crazy. You need to review your setup. I'm about to post a blog entry about this. I got the compile down to 40 minutes =)

Reply
Suggested Answer
Joris de Gruyter responded on 30 Apr 2013 12:09 PM

Hotfix for R2 compiler speed released: blogs.msdn.com/.../ax2012-r2-hotfix-available-improves-compile-speed.aspx

Reply
Suggested Answer
Christian Moen responded on 6 May 2013 3:03 AM

Hi,

I just tested the released compile optimization (support.microsoft.com/.../KBHotfix.aspx), and it was not an amazing improvement.

On my test server, a VMware VM with 2 vCPUs (Intel X5670 @ 2.93 GHz) and 8 GB RAM running the AOS and client (database on a dedicated SQL Server), a full compile was reduced from approximately 5 hours down to 3 hours, 40 minutes.

An improvement, but still not satisfying.

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Suggested Answer
Joris de Gruyter responded on 15 May 2013 9:44 AM

Thanks Dan, I was going to come over to your desk and ask for this :-)

Reply
Suggested Answer
Christian Moen responded on 15 May 2013 11:53 AM

Hi all,

an update on my previous post. Still running the following server:

VMware with 2 vCPUs (Intel X5670 @ 2.93 GHz), 8 GB RAM

However, this time I installed the AOS, client and SQL Server 2012 locally (with TCP/IP disabled in the SQL configuration; just Shared Memory).

Compilation with binary 6.2.1000.413 (not containing the latest compile optimizations) now took 2 hours, 50 minutes.

Based on this, it seems the kernel enhancements are primarily related to the TCP communication between the AOS and the database.

Adding 2 more vCPUs, dedicating 2 to the AOS and 2 to SQL Server, and adding some more memory will probably push the compilation time down further; at least we are moving in the right direction :)

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Suggested Answer
Volker Deuss responded on 16 May 2013 2:17 AM

I don't think this was mentioned in here before:

The thing that had the biggest (positive) impact on compile times in our installations was changing the setting in Control Panel / Hardware and Sound / Power Options to "High performance" instead of "Balanced", which seems to be the default, even on Windows Server... Choosing this option sounds kind of obvious, but it is definitely something you want to check.
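For what it's worth, the same change can be scripted instead of clicking through the Control Panel; the GUID below is the built-in "High performance" scheme on Windows:

```shell
# Switch the active power plan to the built-in "High performance" scheme
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
# Confirm which scheme is now active
powercfg /getactivescheme
```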

Reply
Suggested Answer
Joris de Gruyter responded on 21 May 2013 7:43 AM

Hi Tommy, if the KB only gave you a few extra minutes, it's more likely due to SQL caching than anything else :-)

Reply
Suggested Answer
Joris de Gruyter responded on 8 Nov 2013 5:58 AM

Yes, if you read the documentation on MSDN ( msdn.microsoft.com/.../d6da631b-6a9d-42c0-9ffe-26c5bfb488e3.aspx ), it talks about issues with COM objects. In fact, prior to CU7 there is a form called SysInetHtmlEditor that has this issue. You need to fix that to get rid of two compile errors, but otherwise it works great.

I blogged about this here: daxmusings.codecrib.com/.../what-you-need-to-know-about-15-minute.html

Reply
Suggested Answer
Daniel Weichsel responded on 8 Nov 2013 6:59 AM

On top of the COM issues mentioned by Joris, you can get locked into the newer kernel version by connecting with a CU7 AOS.  I received this failure on a CU6 environment after having run an AxBuild compile from a CU7 AOS:

Object Server 01:  Fatal SQL condition during login. Error message: "The internal time zone version number stored in the database is higher than the version supported by the kernel (5/4). Use a newer Microsoft Dynamics AX kernel."

I pushed past this by setting the time zone version back (in the SqlSystemVariables table), but ONLY because it was a test/development environment.  I would not do that on an important system.
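For anyone who wants to check what a CU7 AOS has written before deciding anything, the stored version can be inspected read-only first. Note that the table, column and parameter names below are from memory and may differ between kernel versions, so verify them against your own schema:

```shell
# Read-only check of the stored time zone version mentioned above
# (database, table, column and parameter names are assumptions):
sqlcmd -S . -d MicrosoftDynamicsAX -Q "SELECT PARM, VALUE FROM SQLSYSTEMVARIABLES WHERE PARM = 'SYSTIMEZONESVERSION'"
```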

Reply
Suggested Answer
Tommy Skaue responded on 9 Nov 2013 7:34 AM

I have tested trying to compile a CU6 model store by pointing axbuild at a CU6 model store, but the tool ignores this and instead compiles the model store for the AOS configuration set up on the system.

Daniel has tested compiling a CU6 model store by changing the AOS configuration, but then the model store could not be used with a CU6 kernel without doing a SQL hack.

I have even tried the crazy thing of running axbuild from a CU6 AOS bin folder, but they've added additional code to the CU7 binaries to support this new build routine (obviously). ;-)

I am sort of concluding that the best solution is updating all binary components to CU7, everything except "database". You will need to go through a proper code upgrade process first, making sure your code and any ISV or VAR code is successfully upgraded to CU7.
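For anyone who hasn't tried AxBuild yet, a typical CU7 invocation looks roughly like this; the paths, the AOS instance number and the worker count are all environment-specific, so treat this as a sketch:

```shell
# AxBuild.exe ships with the AX 2012 R2 CU7 server binaries; run it from the
# server bin folder. /s is the two-digit AOS instance number, /altbin points
# at a client bin folder, and /workers caps the number of parallel compile
# processes (by default it picks a value based on the core count).
cd "C:\Program Files\Microsoft Dynamics AX\60\Server\MicrosoftDynamicsAX\bin"
AxBuild.exe xppcompileall /s=01 /altbin="C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin" /workers=4
```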

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply