
New improvements in developer productivity

Background

Over the last few years, we have received a lot of feedback about developer productivity being lost while waiting for the first breakpoint to be hit after changes have been compiled. This wait has a noticeable impact on the edit -> compile -> debug cycle that developers use to do their work. I am happy to report that we have made progress on this issue, which we have been wrestling with for some time. With the changes described here, developers can be more productive, creating more value for their customers more quickly.

I will explain what we have done in more detail below, but let me show the numbers right up front. We ran tests for two different scenarios: one running an empty test with the unit test framework, and the other starting a runnable class with an empty Main method. In both cases, we set a breakpoint on the first statement of the test or Main method. We ran the tests on developer virtual machines with 2, 8 and 16 cores allocated. All machines have SSD disks. All measurements are mean values over 10 runs, truncated to the nearest second. The tests only loaded PDB files for the tip assembly (using the “load Symbols only for items in the solution” option in Visual Studio).
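
For reference, the two measured scenarios look roughly like the sketch below. The class names are made up, the test is assumed to be written against the SysTest framework (the SysTestCase base class and SysTestMethodAttribute are illustrative assumptions, not the exact code used in the measurements), and the two classes are shown together only for brevity; in a real model each class lives in its own source file.

```xpp
// Scenario 1: an (essentially) empty test in the unit test framework.
class DemoEmptyTest extends SysTestCase
{
    [SysTestMethodAttribute]
    public void emptyTest()
    {
        info('Hit the breakpoint');   // breakpoint set on this line
    }
}

// Scenario 2: a runnable class with an (essentially) empty main method.
class DemoRunnableClass
{
    public static void main(Args _args)
    {
        info('Hit the breakpoint');   // breakpoint set on this line
    }
}
```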

[Figure: Unit test to breakpoint] [Figure: Runnable class to breakpoint]

[Figure: Unit test memory] [Figure: Runnable class memory]

As you can see, there is a lot of improvement, cutting the time down to half or a third of what it was. You may notice that there is not much difference between the numbers for the different core counts. This is because the operations involved are sequential and cannot exploit more cores. You can also see that it is much cheaper to execute a unit test than to start a runnable class (and this is a gift that keeps on giving, every time you execute your test after making a change to your code). The memory footprint has also improved, but it is expected to grow back toward the former level as the system warms up and executes more handlers. Since these are debugging sessions rather than production runs, the benefit to the developer is what matters here, and it is obvious.

How is this achieved?

When the system starts up, it loads the assembly that contains your code. When it does, it will also load all the assemblies that contain the handlers for the events that take place in the code. When we refer to handlers, we mean the following (a sketch follows the list):

  • Data event handlers, triggered when CUD (create, update, delete) operations happen, using the DataEventHandler attribute.
  • Form event handlers, triggered when events happen in the UI, using the FormEventHandler attribute.
  • Form control and data source handlers, triggered when data is loaded into controls, using the FormControlEventHandler and FormDataSourceEventHandler attributes.
  • Pre and Post handlers, called before and after designated methods, using the PreHandlerFor and PostHandlerFor attributes.
  • Handlers for events that are wired up to delegates with the SubscribesTo attribute.
  • Chain of Command handlers, using the ExtensionOf attribute.
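
To make the list concrete, here is a minimal sketch of what a few of these handlers look like in X++. The class names are invented for illustration, the standard application objects (CustTable, SalesFormLetter, SalesLine) are used only as familiar examples, and in a real model each class would live in its own source file.

```xpp
// Illustrative handler class; names are hypothetical.
public final class DemoHandlers
{
    // Data event handler: runs when a CustTable record has been inserted.
    [DataEventHandler(tableStr(CustTable), DataEventType::Inserted)]
    public static void CustTable_onInserted(Common sender, DataEventArgs e)
    {
        CustTable custTable = sender as CustTable;
        // React to the insert here.
    }

    // Post handler: runs after SalesFormLetter.run has executed.
    [PostHandlerFor(classStr(SalesFormLetter), methodStr(SalesFormLetter, run))]
    public static void SalesFormLetter_Post_run(XppPrePostArgs _args)
    {
        // Inspect the call, log, etc.
    }
}

// Chain of Command extension: wraps the insert method on the SalesLine table.
[ExtensionOf(tableStr(SalesLine))]
final class DemoSalesLine_Extension
{
    public void insert()
    {
        next insert();
        // Additional logic after the standard insert runs.
    }
}
```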

The change that we have made is to load the assemblies (and their PDB files, as required) lazily rather than eagerly. Now an assembly is not loaded until one of its handlers is needed, i.e., not before the CUD operation happens, the method is called, and so on. Eventually, when the system is running warm, they will all have been loaded. In practice, the developer will see a very significant improvement in the time from starting the debugger (by pressing F5) until the first breakpoint is hit, depending on the scenario.
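
Conceptually, the lazy scheme amounts to something like the sketch below. This is purely an illustration, not the actual runtime implementation; the class, method, and parameter names are hypothetical.

```xpp
// Purely conceptual sketch of lazy assembly loading; NOT the real kernel code.
final class LazyHandlerLoaderSketch
{
    // Names of handler assemblies that have already been loaded.
    private Set loadedAssemblies;

    public void dispatchEvent(str _handlerAssemblyName)
    {
        if (!loadedAssemblies)
        {
            loadedAssemblies = new Set(Types::String);
        }

        // Eager scheme: every handler assembly is loaded at startup.
        // Lazy scheme: load the assembly the first time one of its handlers fires.
        if (!loadedAssemblies.in(_handlerAssemblyName))
        {
            System.Reflection.Assembly::Load(_handlerAssemblyName);
            loadedAssemblies.add(_handlerAssemblyName);
        }

        // ... look up and invoke the handler methods in the loaded assembly ...
    }
}
```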

There is an additional twist to this. As you know, we do not mandate recompiling ISV code when we make changes to our code – your code is expected to just keep running, irrespective of the changes that we make. We were not able to make the lazy loading compatible with existing ISV code, so for now we apply it only to the Microsoft assemblies. This does not materially change the experience for you – it is still much faster!

These changes are available in Platform update 49, coming to your box soon.

Comments

This post is locked for comments.

  • Rudi Hansen
    Was this change supposed to be in 10.0.25? I just did a test of running a hello world class on 10.0.21 that took 66 seconds, and on 10.0.25 it took 58 seconds. That does not seem like much of an improvement.
  • Peter Villadsen
    trud811: There is no way you can have tried this with the changes mentioned above, since they are so recent that they have not reached customers yet. Wait until PU49 comes onto your box and do the analysis then.
  • trud811
    Peter, I just created a video to illustrate the debug speed on VS2017 (it is the 10.0.17 version, the latest downloadable VM). The case is exactly what you describe in your post – create a model that references Application Suite, modify and run the class. The time to hit the breakpoint was 80 seconds (that includes compile). Here is the link - https://youtu.be/al2-SW_chbw. What was your test? If it is the same as mine, 100 seconds is a step back and not an improvement. AOS start time is also a pain; it would be great if you provided some improvements in this area. But again - 100 seconds for a "hello world" example can't be called developer productivity.
  • Peter Villadsen
    Thanks for your feedback. There is no need for the double quotation marks around “improved”: there is no doubt that the results have, in fact, improved, by a large margin. There is no relation between these changes and any version of Visual Studio, so your statements about this being better in VS2017 are not accurate. We have been following these times for a long time, and we have only seen minor increases due to the growth of the application code (typically in the Application Suite). Are you sure you are comparing apples to apples? The times will differ depending on which assemblies are loaded. The measurements in the blog are the worst-case scenario, taking a dependency on Application Suite. I don’t think you can expect the kind of times you mention below. Note that you are bringing tens of millions of lines of code online when you debug. If you compare this to the experience of debugging ASP.NET with C#, I think you will see something similar, although it is true that the AOS has to spin up as well. The next set of improvements in this area is likely to come from decreasing the time it takes to spin up the AOS.
  • trud811
    It is good that something is happening in this area, but the actual "improved" results look very bad. More than 100 seconds to hit a breakpoint for a runnable class! For VS2017 it was between 47-90 seconds, so no improvement at all compared to the previous VS version. Do you plan to invest more in this? It would be nice to have 3-5 seconds (not 100 as now). Also, about the testing results - do you plan to test on the "default" configuration from LCS that is offered for Development type VMs? But thanks again for such insights.