Question Status

Verified
Tommy Skaue asked a question on 1 Mar 2013 7:05 AM

I would like to hear others share their experience with possible best practice and recommendations around getting AX 2012 compilation time down to a bare minimum.

We know an X++ compilation involves writing to disk on the AOS server and writing transactions to the model database.

We used to have a Best Practice whitepaper and checklist for Dynamics AX 2009, and many of the points are still valid for the next generation of Dynamics AX. I would still like to hear from the community what hints and tips they'd like to share regarding this specific process: compiling AX 2012.

A full compile of AX 2012 may take 2-3 hours; AX 2012 R2 over 5 hours. Can anything be done to reduce the compilation time without tripling the hardware?

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply
Verified Answer
Joris de Gruyter responded on 25 Apr 2013 8:35 AM

Alright, I just posted it. Read all about it here: daxmusings.codecrib.com/.../dynamics-ax-2012-compile-times.html

40 minutes. That's right :-)

Reply
Nicholas Peterson responded on 25 Apr 2013 8:46 AM

Nice info Joris. Just out of curiosity, could you try running the test on the same system using TCP/IP instead of shared memory? I'd be curious to know the performance hit incurred from just that setting alone.

Reply
Joris de Gruyter responded on 25 Apr 2013 9:10 AM

Yes, I've been thinking about doing a few things like that. Also trying to clone the disk contents onto a regular 7200 RPM disk to see the impact of the SSD.

I'm a little hesitant to do it now, since my team is very excited to use this machine for builds, lol. I will see if I can try over the weekend perhaps!

Reply
Andreas Rudischhauser responded on 25 Apr 2013 11:30 AM

Joris, WTF :)

That was an awesome idea to turn off TCP/IP.

I really need to check that.

Reply
Kevin Kidder responded on 26 Apr 2013 7:43 AM

FYI, a post by the Microsoft AX Tools Development team on this long-standing compile issue. Note that a hotfix which will help for R2 will be released shortly:

blogs.msdn.com/.../compile-oh-no.aspx

Reply
Joris de Gruyter responded on 26 Apr 2013 9:00 AM

Also, someone on my blog noted that using shared memory instead of TCP/IP reduced his compile time from 4h to 3.5h, so it does make a difference. In talking with the Microsoft compiler team about this, their tests have shown that SQL Server is smart enough to prefer shared memory over TCP/IP when both are available. So it sounds like you don't necessarily have to turn off TCP/IP. Not sure if the version of SQL matters for this "smart" choice of protocols.
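For anyone who wants to verify which protocol their AOS-to-SQL connection actually ends up using, SQL Server exposes this per session. A quick sketch (the server name AOSSQL01 is a placeholder; run this on the box in question):

```shell
:: Show the transport of the current session; a local connection should
:: report "Shared memory" when that protocol is enabled.
sqlcmd -S AOSSQL01 -Q "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;"

:: The lpc: prefix forces shared memory; this fails fast if it is disabled.
sqlcmd -S lpc:AOSSQL01 -Q "SELECT 1;"
```

The second command is a handy smoke test: if shared memory is off, it errors out immediately instead of silently falling back to TCP/IP.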

Reply
Dick Wenning responded on 29 Apr 2013 1:47 PM

Please mark the reply from Peter Villadsen as the verified answer.

Kind regards, 

Kaya Solutions

Dick Wenning

+31 6 147 989 53 

Landjuweel 5

3905 PE - Veenendaal

 


Reply
Suggested Answer
Joris de Gruyter responded on 30 Apr 2013 12:09 PM

Hotfix for R2 compiler speed released: blogs.msdn.com/.../ax2012-r2-hotfix-available-improves-compile-speed.aspx

Reply
Suggested Answer
Christian Moen responded on 6 May 2013 3:03 AM

Hi,

I just tested the released compile optimization (support.microsoft.com/.../KBHotfix.aspx), and it was not an amazing improvement.

On my test server (VMware with 2 vCPUs (Intel X5670 @ 2.93 GHz) and 8 GB RAM, running the AOS and client, with the database on a dedicated SQL Server), a full compile was reduced from approximately 5 hours down to 3 hours, 40 minutes.

An improvement, but still not satisfying.

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Hans-Petter Lund responded on 13 May 2013 12:32 PM

Just a short update to my previous comments in this thread: I did a new install on the production platform (Hyper-V), with Windows Server 2012 Standard as the OS and AX 2012 R2 CU1 slipstreamed, and got just under 3 hours for the X++ compilation during the initial checklist.

I think that's a reasonable (acceptable) compilation time given the footprint of AX 2012 R2 and a virtual implementation. I can post the Performance Monitor collection taken (CPU) if it's of interest to the community. Maybe I'll manage to test the effect of the newly released hotfix too (can't promise, due to a tight schedule).

All in all, this thread should provide the community with the information needed to discuss the pros and cons (physical vs. virtual, the importance of CPU frequency, storage, etc.).

Reply
AXBench responded on 15 May 2013 7:00 AM

Perfect Joris,

I've set up specific hardware as well, just to compile R2. You can check the numbers here. This is compiling raw AX 2012 R2.

http://axbenchmark.github.com

Feel free to send me pull requests on GitHub adding your hardware and your compile times.

Thanks

Reply
Joris de Gruyter responded on 15 May 2013 7:48 AM

Do you have the R2 compile hotfix installed for your benchmark?

I just compiled R2 CU1 (6.2.1000.156), including some custom code, on an older i7 Q720 laptop (only 1.6 GHz) and it took 3 hours, 5 minutes. Without the compiler hotfix, because I want to compare the difference :-)

Reply
AXBench responded on 15 May 2013 8:32 AM

That's a great time!

That compilation time was for AX 2012 R2 without CU1. The hardware was cheap; it cost me $700 in total (HP desktop server + SSD).

I've just gone up to 6.2.1000.1013 with the compile hotfix and will test it. I didn't read your earlier posts, so I don't know if there's already a small job or class for this, but did you create code to actually time it? We could then do that and share timings down to the second.

I also have no models other than some small modifications. I also turned off some license codes.

Reply
Joris de Gruyter responded on 15 May 2013 9:14 AM

I time it because I run this build automatically using TFS, and TFS times everything. You can put some code in the AOT to get an accurate count, though; I'll have to check where, we've done it before.

Reply
Daniel Weichsel responded on 15 May 2013 9:26 AM

I've captured timings of compiles by adding some code to the SysCompileAll::main method in the testing environment, then running the compile from the menu item in System Administration or launching the class directly.

    // Timing wrapper added to SysCompileAll::main in a test environment.
    System.Diagnostics.Stopwatch    stopWatch;
    System.TimeSpan                 elapsed;

    if (SysCompileAll::prompt())
    {
        stopWatch = new System.Diagnostics.Stopwatch();
        stopWatch.Start();

        infolog.messageWin().activate();

        SysCompileAll::flushClient();
        SysCompileAll::compile();

        // Mark the compile-related checklist items as finished.
        SysCheckList::finished(classnum(SysCheckListItem_Compile));
        SysCheckList::finished(classnum(SysCheckListItem_CompileUpgrade));
        SysCheckList::finished(className2Id(classStr(SysCheckListItem_CompileServ)));

        stopWatch.Stop();
        elapsed = stopWatch.get_Elapsed();

        // Report the wall-clock duration of the full compile in the infolog.
        info(strFmt("Elapsed time: %1", CLRInterop::getAnyTypeForObject(elapsed.ToString())));
    }

Reply
Verified Answer
Joris de Gruyter responded on 1 Mar 2013 1:57 PM

It's a big issue indeed. As shown, there are some ways to shave off time, but in the grand scheme of things it's not a whole lot.

The problem is really the X++ compiler; it simply doesn't scale. Notice how it uses just one core, and not even at 100%. Maybe with super-fast SQL and disk it might hit 100%, of one core only. Remember they originally started developing AX in the early 80s, I believe, so 8086-type hardware with the memory restrictions involved, and the first release of AX came out when the first Pentiums were brand new. The architecture of the compiler hasn't really changed since then.

Bottom line: it doesn't scale, so it doesn't matter a whole lot how much hardware you throw at it. The problem is the legacy history of the X++ compiler versus the amount of new technology and code they've added to 2012 and R2.

Reply
Verified Answer
Peter Villadsen responded on 4 Mar 2013 4:31 PM

The current X++ compiler does not scale very well with an increasing load of source code.

This is basically due to its old architecture, and to the fact that it interfaces with a metadata provider that does not perform very well either. You will not get a lot of benefit from increasing the speed of your CPU, since the CPU is not maxed out anyway: the compiler spends much of its time waiting for metadata to be fetched from the server.

Even though the hardware the client runs on may be 64-bit, the client application (where the compilation happens) is still 32-bit. This puts restrictions on how much memory the compiler can use, as well as on its speed.

The best thing you can do is to make sure the metadata does not have to travel too far: in other words, try running the client on the server box (if possible: there may be many reasons you are not allowed to do this).

Reply
Verified Answer
Mark Veurink responded on 14 Mar 2013 5:47 AM

We tested several different scenarios, and we found that the following is best practice:

- Install AOS and SQL on the same box (splitting them over two boxes will double the compile time)

- Use the latest CPU architecture with 2 vCPUs (a single vCPU will double the time, as will an older CPU)

We timed a full compile of AX 2012 R2 on a VMware system with 2 vCPUs (Intel Xeon E5-2690, 2.9 GHz) at 2 hours and 5 minutes. The same compile on the same virtual machine on an older Xeon 5650 took up to 4 hours and 35 minutes. This means that using the latest CPU architecture speeds things up considerably.

Although... I personally find it really strange that Microsoft is not putting more effort into this problem. Not all customers are running "the latest and greatest" hardware, and deploying a hotfix to the production environment (we of course hope we never have to do this, but...) is becoming a more and more daunting task.

Tip!

If the customer does not want to buy more recent hardware, have them buy a recent Core i7 workstation. Use this workstation (with AOS and SQL installed on it) only for full compile and full CIL generation actions.

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 9:18 AM

Inheritance is definitely a big deal, but not the only thing.

For inheritance, try to inherit from RunBase. Compile. Then change the inheritance to RunBaseBatch instead. The RunBaseBatch methods will not show up on your class (in IntelliSense when you try to use the class from code). The only way to get the class fixed is to go to RunBaseBatch and compile forward on that base class. The compiler won't tell you this, though.

As for single object compile... What if you change the name of a table field for example? That table compiles fine and your X++ classes also think it's fine (it may actually run fine if the object IDs didn't change). But as soon as you try to generate CIL, or move the code to another environment and new IDs are assigned, things don't compile in classes referring to the old field names.

CIL generation has complicated matters, since it's more strict (which is a good thing) than what we had before. CIL technically does a full generation every time (don't trust the incremental! There are easy ways to show the incremental is flawed).

These are just a few easy examples (I'm sure there are even more interesting ones involving changing properties on tables/fields/services/etc.). They show the need for full compiles. Whether you can do the full compile in bits and pieces (back on topic) is a good question. But I wouldn't trust it entirely :-)

Reply
Suggested Answer
Joris de Gruyter responded on 13 Mar 2013 11:26 AM

If you're implementing a full algorithm for dependencies etc., you should look at some of the startup classes that deal with XPO imports. There's some logic there about recompiling, etc. I think it's purely based on compiler output and deciding to keep recompiling certain pieces until there are no more errors or until some threshold is reached.

Reply
Suggested Answer
Maris.Orbidans responded on 18 Mar 2013 6:16 AM

Sorry, I could not resist :) I'll probably repeat some previous comments:

AX compilation watching (like bird watching) reveals that the AX compile is performed in a sequential manner: a little bit of SQL, some more AOS, and a little bit of AX again, in a loop.

That begs for a test:

1) Take the best CPU for single-threaded operation. Here is some info about it:

www.cpubenchmark.net/singleThread.html

I would take the inexpensive Core i5-3570S from that list.

2) Take very fast RAM, ~16-32 GB (I think 32 GB is the max for a Core i5 CPU), and a board that allows this RAM to run at full speed (overclockers' sites can help).

3) Do not use virtualisation; it undeniably causes extra overhead.

4) Run all AX components on this machine (all components of standard AX required for compilation run reasonably well starting from 8 GB of RAM).

Enjoy.

Ah, and it would be nice to overclock this thing to 4+ GHz.

Ah, and the CIL build (after compilation) is a different kind of animal: it uses parallel threads and can employ many cores.

Reply
Suggested Answer
Joris de Gruyter responded on 25 Apr 2013 7:49 AM

Six days, that is crazy. You need to review your setup. I'm about to post a blog entry about this. I got the compile down to 40 minutes =)

Reply
Suggested Answer
Joris de Gruyter responded on 15 May 2013 9:44 AM

Thanks Dan, I was going to come over to your desk and ask for this :-)

Reply
Suggested Answer
Christian Moen responded on 15 May 2013 11:53 AM

Hi all,

an update on my previous post. Still running the following server:

VMWare with 2 vCPU (Intel X5670 @ 2,93 GHz), 8 GB RAM

However, I installed the AOS, client and SQL Server 2012 locally (and disabled TCP/IP in the SQL configuration, using just shared memory).

Compilation with binary 6.2.1000.413 (not containing the latest compile optimizations) now took 2 hours, 50 minutes.

Based on this, it seems the kernel enhancements are primarily related to the TCP communication between the AOS and database.

Adding 2 more vCPUs, dedicating 2 CPUs to the AOS and 2 to SQL Server, and adding some more memory will probably push the compilation time down further. At least we are moving in the right direction :)

--------

Christian Moen | Dynamics AX System Engineer, Avanade Norway

Reply
Suggested Answer
Volker Deuss responded on 16 May 2013 2:17 AM

I don't think this was mentioned in here before:

The thing that had the biggest (positive) impact on compile times in our installations was changing the setting under Control Panel / Hardware and Sound / Power Options to "High performance" instead of "Balanced", which seems to be the default even in Windows Server... Choosing this option sounds kind of obvious, but it is definitely something you want to check.
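For reference, the same power plan change can be made from an elevated command prompt, which is handy on servers managed remotely. A small sketch (the SCHEME_MIN alias maps to the built-in High performance plan on stock Windows installs, as far as I know):

```shell
:: List the available power schemes; the active one is marked with *.
powercfg /list

:: Switch to the High performance plan ("minimum power savings").
powercfg /setactive SCHEME_MIN
```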

Reply
Suggested Answer
Joris de Gruyter responded on 21 May 2013 7:43 AM

Hi Tommy, if the KB only gave you a few extra minutes, it's more likely due to SQL caching than anything else :-)

Reply
Suggested Answer
Joris de Gruyter responded on 8 Nov 2013 5:58 AM

Yes, if you read the documentation on MSDN ( msdn.microsoft.com/.../d6da631b-6a9d-42c0-9ffe-26c5bfb488e3.aspx ), it talks about issues with COM objects. In fact, prior to CU7 there is a form called SysInetHtmlEditor that has this issue. You need to fix that to get rid of two compile errors, but otherwise it works great.

I blogged about this here: daxmusings.codecrib.com/.../what-you-need-to-know-about-15-minute.html
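For those who haven't tried it yet, a rough sketch of an AxBuild.exe invocation (the paths and the instance number 01 are assumptions for a default install; see the MSDN page above for the full parameter list):

```shell
:: Run from the AOS server bin folder on the AOS box; compiles the whole
:: model store server-side, in parallel, without needing a client session.
cd /d "C:\Program Files\Microsoft Dynamics AX\60\Server\MicrosoftDynamicsAX\bin"
axbuild.exe xppcompileall /s=01 /altbin="C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin" /workers=4
```

The /workers count defaults to a value based on core count, so on a dedicated build box you can often just omit it.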

Reply
Suggested Answer
Daniel Weichsel responded on 8 Nov 2013 6:59 AM

On top of the COM issues mentioned by Joris, you can get locked into the newer kernel version by connecting with a CU7 AOS. I received this failure on a CU6 environment after having run an AxBuild compile from a CU7 AOS:

Object Server 01:  Fatal SQL condition during login. Error message: "The internal time zone version number stored in the database is higher than the version supported by the kernel (5/4). Use a newer Microsoft Dynamics AX kernel."

I pushed past this by setting the time zone version back (in the SqlSystemVariables table), but ONLY because it was a test/development environment. I would not do that on an important system.

Reply
Suggested Answer
Tommy Skaue responded on 9 Nov 2013 7:34 AM

I have tested trying to compile a CU6 model store just by pointing axbuild at it, but the tool ignores this and instead compiles the model store for the AOS configuration set up on the system.

Daniel has tested compiling a CU6 model store by changing the AOS configuration, but then the model store could not be used with a CU6 kernel without a SQL hack.

I have also tested the crazy idea of running axbuild from a CU6 AOS bin folder, but they've added additional code to the binaries to support this new build routine (obviously). ;-)

I am tentatively concluding that the best solution is updating all binary components to CU7, everything except "database". You will need to go through a proper code upgrade process first, making sure your code and any ISV or VAR code is successfully upgraded to CU7.

Tommy Skaue | Dynamics AX Developer from Norway | http://yetanotherdynamicsaxblog.blogspot.no/ | www.axdata.no

Reply