Question Status

Verified
Tommy Skaue asked a question on 1 Mar 2013 7:05 AM

I would like to hear others share their experience with possible best practice and recommendations around getting AX 2012 compilation time down to a bare minimum.

We know an X++ compilation involves writing to disk on the AOS server and writing transactions to the model database.

We used to have a Best Practice Whitepaper and Checklist for Dynamics AX 2009, and many of the points are still valid for the next generation of Dynamics AX. I would still like to hear from the community what hints and tips they would like to share regarding this specific process: compiling AX 2012.

A full compile of AX 2012 may take 2-3 hours, and AX 2012 R2 over 5 hours. Can anything be done to reduce the compilation time without tripling the hardware?

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Verified Answer
Joris de Gruyter responded on 1 Mar 2013 1:57 PM

It's a big issue indeed. As shown, there are some ways to shave off some time, but in the grand scheme of things it's not a whole lot.

The problem is really the X++ compiler; it simply doesn't scale. Notice how it uses just one core, and not even at 100%. Maybe with super-fast SQL and disk it might hit 100%, but only of that one core. Remember, they started developing AX originally in the early '80s, I believe, so 8086-type hardware with the memory restrictions that implies, and the first release of AX came out when the first Pentiums were brand new. The architecture of the compiler hasn't really changed since then.

Bottom line: it doesn't scale, so it doesn't matter a whole lot how much hardware you throw at it. The problem is the legacy of the X++ compiler versus the amount of new technology and code they've added in 2012 and R2.

Reply
Verified Answer
Peter Villadsen responded on 4 Mar 2013 4:31 PM

The current X++ compiler does not scale very well as the amount of source code increases.

This is basically due to its old architecture, and to the fact that it interfaces with a metadata provider that does not perform very well either. You will not get a lot of benefit from increasing the speed of your CPU, since the CPU is not maxed out anyway: the compiler spends much of its time waiting for metadata to be fetched from the server.

Even though the hardware the client is running on may be 64-bit, the client application (where the compilation happens) is still 32-bit. This places restrictions on how much memory the compiler can use, as well as on its speed.

The best thing you can do is make sure the metadata does not have to travel too far: in other words, try running the client on the server box (if possible; there may be many reasons you are not allowed to do this).

Reply
Verified Answer
Mark Veurink responded on 14 Mar 2013 5:47 AM

We tested several different scenarios, and we found that the following is best practice:

- Install AOS and SQL on the same box (splitting them over two boxes will double the compile time)

- Use the latest CPU architecture with 2 vCPUs (1 vCPU will double the time, and so will an older CPU)

We timed a full compile of AX 2012 R2 on a VMware system with 2 vCPUs on an Intel Xeon E5-2690 @ 2.9 GHz at 2 hours and 5 minutes. The same compile on the same virtual machine on an older Xeon 5650 took up to 4 hours and 35 minutes. This means that using the latest CPU architecture speeds things up considerably.

Although... I personally find it really strange that Microsoft is not putting more effort into this problem. Not all customers are running "the latest and greatest" hardware, and deploying a hotfix to the production environment (we of course hope we never have to, but...) is becoming a more and more daunting task.

Tip!

If the customer does not want to buy more recent hardware, have them buy a recent Core i7 workstation. Use this workstation (with AOS and SQL installed on it) only for full compile and full CIL actions.

Reply
Verified Answer
Joris de Gruyter responded on 25 Apr 2013 8:35 AM

Alright, I just posted it. Read all about it here: daxmusings.codecrib.com/.../dynamics-ax-2012-compile-times.html

40 minutes. That's right :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 9:18 AM

Inheritance is definitely a big deal, but not the only thing.

For inheritance, try inheriting from RunBase. Compile. Then change the class to inherit from RunBaseBatch instead. The RunBaseBatch methods will not show up on your class (in IntelliSense when you try to use the class from code). The only way to get that class fixed is to go to RunBaseBatch and compile forward on that base class. The compiler won't tell you this, though.
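To make that concrete, here is a minimal X++ sketch (the class name is invented for illustration): compile it once as shown, then switch the base class to RunBaseBatch and notice that the batch methods only appear on the class after you compile forward on RunBaseBatch.

    // Hypothetical demo class. Step 1: compile it extending RunBase.
    // Step 2: change "extends RunBase" to "extends RunBaseBatch" and compile again.
    // The RunBaseBatch members will not show up in IntelliSense for this class
    // until you compile forward on RunBaseBatch (AOT > RunBaseBatch > Add-Ins > Compile forward).
    class DemoCompileForward extends RunBase
    {
    }

    // RunBase declares pack/unpack as abstract, so even a demo class must implement them.
    public container pack()
    {
        return conNull();
    }

    public boolean unpack(container _packedClass)
    {
        return true;
    }

    public void run()
    {
        info("DemoCompileForward ran.");
    }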

As for single object compile... What if you change the name of a table field, for example? The table compiles fine and your X++ classes also think they're fine (the code may actually run fine if the object IDs didn't change). But as soon as you try to generate CIL, or move the code to another environment where new IDs are assigned, classes referring to the old field name no longer compile.
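As a hypothetical illustration of that pitfall (the table and field names are made up here), think of a class method compiled against a field that is later renamed:

    // DemoTable and OldStatus are invented names. After the field is renamed in the AOT,
    // the compiled p-code still resolves the reference by ID, so this method may keep
    // running; a full CIL generation, or an import into an environment that assigns
    // new IDs, is what finally reports it as a compile error.
    public void setStatus(DemoTable _demoTable)
    {
        _demoTable.OldStatus = 1;
        _demoTable.update();
    }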

CIL generation has complicated matters since it's stricter (which is a good thing) than what we had before. Technically, CIL needs a full generation every time (don't trust the incremental! There are easy ways to show the incremental is flawed).

These are just a few easy examples (I'm sure there are even more interesting ones involving changed properties on tables/fields/services/etc.). They show the need for full compiles. Whether you can do the full compile in bits and pieces (back on topic) is a good question, but I wouldn't trust it entirely :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 11:26 AM

If you're implementing a full algorithm for dependencies etc., you should look at some of the startup classes that deal with XPO imports. There's some logic there about recompiling. I think it's purely based on the compiler output and a decision to keep recompiling certain pieces until there are no more errors or until some threshold is reached.
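A rough sketch of that retry idea in X++ (this is not the shipped import code; the method name is invented and the error-count bookkeeping is omitted):

    // Recompile a list of AOT paths for a fixed number of passes, the way an import
    // routine might keep retrying until the errors stop dropping.
    static void recompilePasses(container _aotPaths, int _maxPasses = 4)
    {
        TreeNode node;
        int      pass;
        int      i;

        for (pass = 1; pass <= _maxPasses; pass++)
        {
            for (i = 1; i <= conLen(_aotPaths); i++)
            {
                node = TreeNode::findNode(conPeek(_aotPaths, i));
                if (node)
                {
                    node.AOTcompile(1); // flag value as commonly used in build scripts
                }
            }
            // The real startup classes inspect the compiler output here and stop once
            // the error count no longer decreases; that part is left out of this sketch.
        }
    }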

Reply
Suggested Answer
Maris.Orbidans responded on 18 Mar 2013 6:16 AM

Sorry, I could not resist :) I probably repeat some previous comments:

AX compilation watching (like bird watching) revealed that the AX compile is performed in a sequential manner: a little SQL, some more AOS, and a little AX again, in a loop.

This begs for a test:

1) Take the best CPU for single-threaded operation. Here is some info about it:

www.cpubenchmark.net/singleThread.html

I would take the inexpensive Core i5-3570S from that list.

2) Take very fast RAM, ~16-32 GB (I think 32 GB is the maximum for a Core i5 CPU), and a board that allows this RAM to run at full speed (overclocking sites can help).

3) Do not use virtualisation; it undeniably causes extra overhead.

4) Run all AX components on this machine (all the standard AX components required for compilation feel fine with 8 GB of RAM or more).

Enjoy.

Ah, and it would be nice to overclock this thing to 4+ GHz.

Ah, and the CIL build (after compilation) is a different kind of animal: it uses parallel threads and can employ many cores.

Reply
Suggested Answer
Joris de Gruyter responded on 25 Apr 2013 7:49 AM

Six days, that is crazy. You need to review your setup. I'm about to post a blog entry about this. I got the compile down to 40 minutes =)

Reply
Suggested Answer
Joris de Gruyter responded on 30 Apr 2013 12:09 PM

Hotfix for R2 compiler speed released: blogs.msdn.com/.../ax2012-r2-hotfix-available-improves-compile-speed.aspx

Reply
Suggested Answer
Christian Moen responded on 6 May 2013 3:03 AM

Hi,

I just tested the released compile optimization (support.microsoft.com/.../KBHotfix.aspx), and it was not an amazing improvement.

On my test server (VMware with 2 vCPUs, Intel X5670 @ 2.93 GHz, 8 GB RAM, running the AOS and client, with the database on a dedicated SQL Server), a full compile was reduced from approximately 5 hours down to 3 hours, 40 minutes.

An improvement, but still not satisfactory.

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Suggested Answer
Joris de Gruyter responded on 15 May 2013 9:44 AM

Thanks Dan, I was going to come over to your desk and ask for this :-)

Reply
Suggested Answer
Christian Moen responded on 15 May 2013 11:53 AM

Hi all,

An update on my previous post. I'm still running the following server:

VMware with 2 vCPUs (Intel X5670 @ 2.93 GHz), 8 GB RAM

However, this time I installed the AOS, client and SQL Server 2012 locally (disabled TCP/IP in the SQL Server configuration and just used Shared Memory).

Compilation with binary 6.2.1000.413 (not containing the latest compile optimizations) now took 2 hours, 50 minutes.

Based on this, it seems the kernel enhancements are primarily related to the TCP communication between the AOS and database.

Adding 2 more vCPUs (dedicating 2 CPUs to the AOS and 2 CPUs to SQL Server) and adding some more memory will probably push the compilation time down further. At least we are moving in the right direction :)

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Suggested Answer
Volker Deuss responded on 16 May 2013 2:17 AM

I don't think this was mentioned in here before:

The thing that had the biggest (positive) impact on compile times in our installations here was to change the setting in Control Panel / Hardware and Sound / Power Options to "High performance" instead of "Balanced", which seems to be the default, even in Windows Server... Choosing this option sounds kind of obvious, but it is definitely something you want to check.

Reply
Suggested Answer
Joris de Gruyter responded on 21 May 2013 7:43 AM

Hi Tommy, if the KB only gained you a few minutes, that's more likely due to SQL caching than anything else :-)

Reply
Suggested Answer
Joris de Gruyter responded on 8 Nov 2013 5:58 AM

Yes, if you read the documentation on MSDN ( msdn.microsoft.com/.../d6da631b-6a9d-42c0-9ffe-26c5bfb488e3.aspx ) it talks about issues with COM objects. In fact, prior to CU7 there is a form called SysInetHtmlEditor that has this issue. You need to fix that to get rid of two compile errors, but otherwise it works great.

I blogged about this here: daxmusings.codecrib.com/.../what-you-need-to-know-about-15-minute.html

Reply
Suggested Answer
Daniel Weichsel responded on 8 Nov 2013 6:59 AM

On top of the COM issues mentioned by Joris, you can get locked into the newer kernel version by connecting with a CU7 AOS.  I received this failure on a CU6 environment after having run an AxBuild compile from a CU7 AOS:

Object Server 01:  Fatal SQL condition during login. Error message: "The internal time zone version number stored in the database is higher than the version supported by the kernel (5/4). Use a newer Microsoft Dynamics AX kernel."

I pushed past this by setting the timezone version back (in the SqlSystemVariables table), but ONLY because it was on a test/development environment.  I would not do that for an important system.

Reply
Suggested Answer
Tommy Skaue responded on 9 Nov 2013 7:34 AM

I have tested compiling a CU6 model store just by pointing axbuild at it, but the tool will ignore this and instead compile the model store for the AOS configuration set up on the system.

Daniel has tested compiling a CU6 model store by changing the AOS configuration, but then the model store could not be used with a CU6 kernel without doing a SQL hack.

I have even tested the crazy idea of running axbuild from a CU6 AOS bin folder, but they've added additional code to the CU7 binaries to support this new build routine (obviously). ;-)

I am tentatively concluding that the best solution is to update all binary components to CU7, everything except the "database" component. You will need to go through a proper code upgrade process first, making sure your code and any ISV or VAR code is successfully upgraded to CU7.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Tommy Skaue responded on 5 Mar 2013 12:48 AM

Thanks for stepping in, Peter and Joris. I'm glad you are able to share your insights even given your NDAs. :-)

The notion "keep your metadata close" is quite interesting. I would like to see an overview of the "moving parts" or components involved here. I know some people claim having a SQL Server instance on the same machine as the AOS improves compilation performance, but that assumes the same machine is capable of handling all components involved (AOS and SQL Server in one box). If the "metadata" are best collected from in-memory on the AOS, having the SQL Server on the same box wouldn't help necessarily.  

I would like to mark Peter's and Joris' answers as "answer", but I can't help feeling there is more to be said. :-)

(fingers crossed).

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Reuben Gathright responded on 5 Mar 2013 9:37 AM

This is a fascinating discussion, because our AX consultants and Microsoft support keep telling me that running the AOS and SQL Server on the same box is ill-advised.

I guess in the "right situation" it is ideal.  

Several members of our IT department (we are not consultants, but rather a company that develops its own software to avoid consulting fees) will be attending Convergence 2013 in New Orleans. We will be bringing a printed copy of this discussion with us to see if anyone from Microsoft has an opinion.

Reply
Tommy Skaue responded on 5 Mar 2013 9:52 AM

Well, I haven't yet concluded that having SQL Server on the same box is advisable. SQL Server will have its own memory space, independent of the AOS and the AX client. By default, a SQL Server instance will allocate as much memory as it can, but it will not release any memory it grabs unless it is restarted. I believe the AX 2012 AOS does the same unless you change the GC settings (someone correct me if I'm wrong). Sure, if you have enough RAM, this won't be a problem. Having SQL Server on the same box eliminates any network latency, if that should prove to be an issue.

So far, it seems to me the bottleneck is most likely never the network latency, but rather a combination of CPU and RAM, allocated primarily to the AOS and the client.

In many scenarios, and perhaps in general, having SQL Server on the same box as the AOS is bad, but we are looking for an optimal build scenario, shaving those precious minutes or perhaps hours off the total compilation time.

A fast CPU, enough RAM, quick access to metadata, and removing any unnecessary chit-chat (warnings and errors) seem to be the highlights.

I find it a very fascinating discussion, indeed. ;-)

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Tommy Skaue responded on 6 Mar 2013 2:35 AM

I have marked both the answers from Joris and Peter as "answers", but feel free to contribute if anyone else has additional insights. :-)

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Xander Vink responded on 6 Mar 2013 3:45 AM

"...but it will not release any memory it grabs unless it is restarted. I believe the AX2012 AOS does the same unless you change the GC settings (someone correct me if I'm wrong)."

Maybe the hotfix mentioned in the link below fixes that problem?

kaya-systems.com/ax2012-memory-leak

Reply
Andreas Rudischhauser responded on 6 Mar 2013 4:32 AM

Isn't this hotfix included in R2?

Reply
Tommy Skaue responded on 6 Mar 2013 4:37 AM

It is. But memory leaks are, to me, a different issue: memory consumption caused by a flaw/bug, not by design.

By design, the AOS will not release allocated memory if it can avoid it. This is expected behavior and something we need to take into consideration. Obviously, in an environment with little system memory, the OS will need to shift things around and use the swap file. I once saw SQL Server and AX 2012 running on the same box with 4 GB of allocated RAM. It just didn't work at all, with SQL Server and AX fighting for the RAM. ;-)

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Xander Vink responded on 6 Mar 2013 5:57 AM

You can limit the maximum memory usage of the AOS by applying the MaxMemLoad setting, as specified here: technet.microsoft.com/.../aa569637

The same can be done for SQL Server by specifying its maximum amount of memory.

Reply
Xander Vink responded on 6 Mar 2013 7:04 AM

Although that particular setting (MaxMemLoad) doesn't seem to work very well...

Back on topic about how to get AX to compile faster:

Would it be possible, and if so faster, to split the full compile into smaller compiles of tables, classes, forms, etc., and run each of these in its own client session?

Reply
Tommy Skaue responded on 6 Mar 2013 7:18 AM

Well, the order in which the elements are compiled is important, and more often than not it will need several runs before all the dependent elements are compiled correctly. So the short and safe answer is no. Besides, you also need to be sure CIL compiles, and this step is AX 2012 specific and crucial for success. I've seen various examples of X++ compiling but CIL not compiling due to inconsistencies between X++ and managed code. The only way to catch those is if the AOT is fully compiled first.

My best bet is to have a build server with scripts and a setup predefined for optimal build performance. It would be cool to run a poll here and see what compile times people have. I reckon most people will have no less than a 4-hour compile on AX 2012 R2 and maybe 2.5 hours on AX 2012.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Ax2009Tech responded on 7 Mar 2013 4:40 AM

Based on ONE test performed on a pre-production, high-end Hyper-V platform with SQL Server 2012 SP1 CU2 Enterprise (32 GB reserved vRAM) for AX 2012 R2 GA/RTM, I was able to cut compilation time by approximately one hour when running the compilation from the SQL Server box with a locally installed AOS instance and client, compared to running it from a dedicated AOS server. So this single test supports the suggestions from Mr. Villadsen.

BTW, a very interesting thread that I would like to contribute to despite its being marked as answered :-)

Best regards

AX2009Tech aka Back2AX, Norway

http://ax2012tech.blogspot.com

http://ax2009tech.blogspot.com

Reply
Tommy Skaue responded on 8 Mar 2013 12:19 AM

Thanks, AX2009Tech.

A couple of questions.

Are you saying that moving the AOS to the SQL box made the difference, or running the client on the same box as the AOS?

Or simply that having everything on the same box made it go faster?

One hour less compilation time. What was the total time, if you don't mind me asking?

What is your Best Practice Compilation Level set to?

:-)

I will do some tests of my own soon. I just need to prepare a server in between everything else.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Ax2009Tech responded on 9 Mar 2013 5:21 AM

Skaue,

AOS and client installed on the SQL Server (all on the same box), and the compilation run from the SQL Server. Basically, I only use this AOS instance when compiling and synchronizing, to avoid network communication. I don't have the exact figures available on my private PC, but it went down from around 4.5 hours (run on one of the AOS servers normally used in this environment) to 3.5 hours. BP = default.

As mentioned in my first reply, this supports what Mr. Villadsen has explained (a very trustworthy source in this context).

Best regards

AX2009Tech aka Back2AX, Norway

http://ax2012tech.blogspot.com

http://ax2009tech.blogspot.com

Reply
Tom Ghesquiere responded on 13 Mar 2013 4:08 AM

Hi there,

Funny that you ask this question right now, as we've just decided to try to parallelize the compile. Basically, what we do is start up 8 AX clients and let each of them compile part of the AOT. Currently we still have some issues with the Visual Studio projects (they give strange compile errors), so we placed them out of scope for now. I'm not even sure whether compiling them is really necessary.
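For reference, such a per-branch compile can be scripted; here is a hedged X++ sketch of the kind of job each client session could run (the helper name is invented, and error collection is not shown):

    // Compile a single AOT branch, e.g. @'\Classes' or @'\Data Dictionary\Tables'.
    // Each of the parallel client sessions would be handed a different branch.
    static void compileBranch(str _aotPath)
    {
        TreeNode branch;

        branch = TreeNode::findNode(_aotPath);
        if (branch)
        {
            branch.AOTcompile(1); // compile the subtree; flag value as commonly used in build scripts
        }
        else
        {
            warning(strFmt("AOT path %1 not found", _aotPath));
        }
    }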

So the normal full compile also takes us 4 hours. The first tests with the parallelized compile (without the VS projects) take us 30 minutes, on a 3-tier configuration (3 different boxes) with a virtualized 2-core AOS server.

So these results are hopeful, but we haven't tested it in production environments yet, so I'm not really sure whether my compile does exactly the same as the normal one. I'll keep you posted.

Reply
Joris de Gruyter responded on 13 Mar 2013 8:39 AM

That's an interesting approach, but not very reliable. You obviously compile each object once, but there is a reason the full compile does 4 passes over all objects: you need to be able to resolve dependencies correctly. Maybe you could run your process 4 times to simulate the 4-pass compile across all objects, which I guess would put you at 4x 30 minutes, which is still only 2 hours.

I don't know. It's an interesting idea, but I'm a little hard-pressed to trust it 100%.

Reply