Hello,
This weekend I was flabbergasted after a long debugging session. There were performance issues (tell me something new), but this time it was actually interesting and not DB related. Our un-optimized test environment, loaded with a database copy of production, was outperforming production by a factor of 3 in a certain scenario. Hence the start of my research, and here is what I found:
We have 2 production AOSes. One runs batches/connectors and the other serves the regular AX clients. I have developed a small class/method which is called by the business connector (or WCF, but that is generally much slower). The method can be called from X++ or from IL (via a static runClassMethodIL wrapper around the same method).
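For illustration, the pattern looks roughly like this. Note that the class name PerfTestService, the doWork body and the wrapper name are placeholders for this post, not the actual production code:

```
// Simplified sketch (placeholder names, not the real production code).
// In the AOT these are two static methods on the same class.

// The "real" method: a static server method doing some database requests
// and returning its result in a container.
public static server container doWork(container _args)
{
    CustTable custTable;
    int       rows;

    while select custTable
    {
        rows++;
    }

    return [rows];
}

// Thin wrapper that routes the same call through the CIL runtime.
public static server container doWorkIL(container _args)
{
    new XppILExecutePermission().assert();

    return runClassMethodIL(classStr(PerfTestService),
                            staticMethodStr(PerfTestService, doWork),
                            _args);
}
```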
Some results (in milliseconds). The numbers are consistent and reproducible when the calls are run in a loop:
| Scenario                              | PROD-1 | PROD-2 |
|---------------------------------------|--------|--------|
| Business connector X++                | 150    | 80     |
| Business connector IL                 | 80     | 220    |
| AOT-job with equal database requests  | 25     | 25     |
As you can see, one server runs fast in IL and the other prefers X++. I get the same results when I run the method directly from an AOT job. Obviously this situation is 1) strange, 2) unwanted and 3) impossible to optimize against. So I would like to understand what is happening and, if possible, get the two servers to behave the same.
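For reference, the numbers above come from a simple loop along these lines (simplified sketch using the placeholder class from earlier; the real harness differs slightly):

```
// Job that times repeated calls to the (placeholder) method and reports the
// average elapsed milliseconds.
static void timeDoWork(Args _args)
{
    System.Diagnostics.Stopwatch stopwatch;
    int     iterations = 100;
    int     i;
    int64   elapsedMs;

    stopwatch = System.Diagnostics.Stopwatch::StartNew();

    for (i = 1; i <= iterations; i++)
    {
        // Swap in PerfTestService::doWorkIL(conNull()) to time the CIL path.
        PerfTestService::doWork(conNull());
    }

    stopwatch.Stop();
    elapsedMs = stopwatch.get_ElapsedMilliseconds();

    info(strFmt("%1 calls in %2 ms (avg %3 ms per call)",
                iterations, elapsedMs, elapsedMs div iterations));
}
```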
Some background on the environment:
AX 2012 R2 CU6
Both servers have identical hardware and are dedicated machines (not virtual)
Both servers have the same AX installation (did a binary compare to verify this)
Both servers have the same latency and connectivity to the database (as can be seen from the 'AOT-job with equal database requests' row above)
Both servers are treated equally during model deployments when it comes to cleanup etc.; a full AOT and CIL compile is done before each deployment
Does anyone have ideas or suggestions on which direction I should look into? Note that I am not looking to optimize the method in question; I want to understand and, if possible, fix the inconsistency between these two servers.
Any help is appreciated!