Over the last few years, we have received a lot of feedback about developer productivity being lost while waiting for the first breakpoint to be hit after changes have been compiled. This wait has a noticeable impact on the edit -> compile -> debug cycle that developers use to do their work. I am happy to report that we have made progress on this issue, which we have been wrestling with for some time. With the changes described here, developers can be more productive, creating more value for their customers more quickly.
I will explain what we have done in more detail below, but let me show the numbers right up front. We ran tests for two different scenarios: one running an empty test with the unit test framework, and the other starting a runnable class with an empty Main method. In both cases, we set a breakpoint on the first statement of our test or Main method. We ran the tests on developer virtual machines allocated 2, 8, and 16 cores. All machines have SSD disks. All measurements are mean values from 10 runs, truncated to the nearest second. The tests only loaded PDB files for the tip assembly (using the "load symbols only for items in the solution" option in Visual Studio).
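The aggregation used for the numbers above (the mean of 10 runs, truncated to the nearest second) can be sketched as follows. The sample timings here are hypothetical placeholders, not the actual measurements:

```python
# Sketch of the measurement aggregation described above: mean of 10 runs,
# truncated (not rounded) to the nearest whole second.

def truncated_mean_seconds(samples):
    """Mean of the samples in seconds, truncated to a whole second."""
    return int(sum(samples) / len(samples))

# Hypothetical time-to-first-breakpoint measurements (seconds) from 10 runs.
runs = [34.2, 33.8, 35.1, 34.6, 33.9, 34.4, 35.0, 34.1, 33.7, 34.5]
print(truncated_mean_seconds(runs))  # mean is 34.33, truncated to 34
```

Truncation (rather than rounding) slightly understates each mean, but it is applied uniformly, so the before/after comparison is unaffected.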
As you can see, there is a lot of improvement, cutting the time to a half or a third of the original. You may notice that there is not a lot of difference between the numbers across core counts. This is because the operations involved are sequential and cannot exploit more cores. You can also see that it is much cheaper to execute a unit test than to run a runnable class (and this is a gift that keeps on giving, every time you execute your test after making a change in your code). The memory footprint has also improved, but it is expected to grow back to the former level as the system warms up and executes more handlers. Note that this applies to debugging, not production runs, so the benefits for the developer are immediate.
When the system starts up, it loads the assembly that contains your code. When it does, it also loads all the assemblies that contain the handlers for the events that take place in that code, such as handlers for data (Create, Update, Delete) events and for method calls.
The change we have made is to load these assemblies (and their PDB files, as required) lazily rather than eagerly. An assembly is now not loaded until one of its handlers is actually needed, i.e., not before the CUD operation happens, the method is called, etc. Eventually, when the system is running warm, they will all have been loaded; in the meantime, the developer sees a very significant reduction in the time from starting the debugger (by pressing F5) until the first breakpoint is encountered, depending on the scenario.
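The lazy-loading idea can be illustrated with a minimal sketch: instead of loading every handler at startup, a registry remembers how to load each handler and pays the loading cost only when the corresponding event first fires. The names here are illustrative inventions, not the actual platform APIs:

```python
# Minimal sketch of lazy handler loading, assuming a registry that maps an
# event to a loader callable. All names are hypothetical illustrations.

class HandlerRegistry:
    def __init__(self):
        self._loaders = {}  # event name -> callable that "loads" the handler
        self._loaded = {}   # event name -> loaded handler (cache)

    def register(self, event, loader):
        # Eager loading would call loader() right here, paying the cost at
        # startup. Lazily, we only remember how to load the handler.
        self._loaders[event] = loader

    def dispatch(self, event, *args):
        # Load the handler on first use, then reuse the cached instance.
        if event not in self._loaded:
            self._loaded[event] = self._loaders[event]()
        return self._loaded[event](*args)

registry = HandlerRegistry()
# Registration is cheap; nothing is loaded yet, so startup stays fast.
registry.register("record.insert", lambda: (lambda rec: f"inserted {rec}"))
# The loader runs only when the event first fires.
print(registry.dispatch("record.insert", "customer"))
```

Once every event has fired at least once, the cache is fully populated and the system behaves as if everything had been loaded eagerly, which matches the "running warm" behavior described above.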
There is an additional twist to this. As you know, we do not mandate recompiling ISV code when we make changes to our code; your code is expected to just keep running, irrespective of the changes that we make. We were not able to make lazy loading compatible with your existing, already-compiled code, so for now we do the lazy loading only for the Microsoft assemblies. This does not materially change the experience for you; it is still much faster!
These changes are available in Platform update 49, coming to your box soon.