
Part 3 – Dynamics 365 finance and operations apps performance testing with JMeter - Result Analysis


Introduction

This blog series has previously provided knowledge and guidance on how to perform UI performance testing in finance and operations apps with JMeter, and described the data collected from those tests. Once performance testing is complete, you can compare the actual results against the expected performance criteria or baseline data, evaluate whether the system met the performance requirements, and identify any performance issues or bottlenecks. But how do you ensure that the test results are reliable enough to base those conclusions on? In this part, I discuss practices for validating performance testing results.


Here are several suggested approaches to validate the performance testing results:

  1. Ensure key samplers executed successfully within the test plan.
  2. Validate that the response body of key samplers contains expected success data.
  3. Confirm the presence of the expected workload within the system.
  4. Verify that data has been accurately created within the system.

These validation steps serve as essential checkpoints to ascertain the reliability and accuracy of the performance testing outcomes.

 

Ensure key samplers executed successfully within the test plan

During JMeter test plan execution, the primary focus is ensuring that key samplers execute successfully without errors. These key samplers typically involve authentication steps, such as POST requests to authentication endpoints like "POST https://login.microsoftonline.com/kmsi", and interactions with finance and operations services, such as "POST https://yourenvironment.sandbox.operations.dynamics.com/Services/ReliableCommunicationManager.svc/ProcessMessages". It is crucial to verify that these key samplers execute successfully, denoted by a response code of 200.

In my practice, I analyze the testing results using both the Summary Report and the View Results Tree. The Summary Report offers a high-level overview, highlighting the number and types of samples that failed during testing. For more detail on individual requests and responses, I rely on the View Results Tree, which provides in-depth insight into each sample and makes it possible to pinpoint errors.

For example, I want to make sure the multi-user test is valid before running the large-volume purchase order creation test. In a dry run with 10 concurrent users, I saw an error in one sample from the "1. Access to application" group, with an error rate of 10%, meaning that one out of the 10 executions of that sample failed. Out of 870 samples executed in total, the failure rate is 0.23%. Therefore, based on the Summary Report, it's clear that almost all key samplers ran successfully without errors.
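If you want to reproduce the Summary Report figures outside the JMeter GUI (for example, in a pipeline), the per-label error rates can be computed directly from the .jtl results file. The following is a minimal sketch in Python, assuming the test plan writes a CSV-format .jtl with the default label and success columns; the file name results.jtl is a placeholder.

```python
# Minimal sketch that mirrors JMeter's Summary Report, assuming a CSV-format
# .jtl results file with the default columns (label, responseCode, success, ...).
import csv
from collections import defaultdict

RESULTS_FILE = "results.jtl"  # placeholder: path to your JMeter results file

totals = defaultdict(int)
errors = defaultdict(int)

with open(RESULTS_FILE, newline="") as f:
    for row in csv.DictReader(f):
        label = row["label"]
        totals[label] += 1
        # JMeter writes "true"/"false" in the success column
        if row["success"].lower() != "true":
            errors[label] += 1

grand_total = sum(totals.values())
grand_errors = sum(errors.values())

for label in sorted(totals):
    rate = errors[label] / totals[label] * 100
    print(f"{label}: {totals[label]} samples, {errors[label]} errors ({rate:.2f}%)")

print(f"TOTAL: {grand_total} samples, {grand_errors} errors "
      f"({grand_errors / grand_total * 100:.2f}%)")
```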

 

You can locate the details of the failing sampler in the View Results Tree by looking for the label name LogIn/conf/v2/d365fo/fpconfig.min.json-67. The sampler result shows a response code of 503, which is a Service Unavailable error.

 

A 503 is an HTTP status code indicating that the web server is reachable but cannot process the request right now. According to the information in the View Results Tree, this request was trying to get a response from https://fpc.msedge.net/conf/v2/d365fo/fpconfig.min.json?monitorId=d365fo, which belongs to Azure Internet Analyzer and does not affect the testing outcome, so we can ignore this error in the testing result.

Therefore, while we expect all key samplers to run without errors, the presence of errors in the result does not automatically mean the test has failed.
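Once you have confirmed that a particular failing sampler is benign, as with the Internet Analyzer call above, it can help to keep an explicit ignore list so the pass/fail decision stays repeatable across runs. Here is a small sketch along the same lines as the summary script above; the label substring is the one from this example and should be adapted to your own test plan.

```python
import csv

# Labels (or label substrings) whose failures are known to be benign, such as
# the Azure Internet Analyzer call discussed above. Adjust to your test plan.
IGNORABLE_LABELS = ["fpconfig.min.json"]

def blocking_failures(jtl_path: str) -> list[dict]:
    """Return failed samples that are NOT on the ignore list."""
    with open(jtl_path, newline="") as f:
        return [
            row for row in csv.DictReader(f)
            if row["success"].lower() != "true"
            and not any(token in row["label"] for token in IGNORABLE_LABELS)
        ]

if __name__ == "__main__":
    # Treat the run as suspect only if non-ignorable failures remain.
    remaining = blocking_failures("results.jtl")  # placeholder file name
    print(f"{len(remaining)} failures need investigation")
```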
 

Validate that the response body of key samplers contains expected success data

One more thing we need to check is the response body of the key requests. Sometimes you might see a low failure rate yet find that no data was created at the end. As mentioned, key samplers must execute successfully with a response code of 200, but we also need to make sure the response body returned by the server is valid so that correlated data can be extracted correctly.


For example, before we can interact with finance and operations apps, a security token has to be retrieved successfully. The KMSI sampler sends a POST request to https://login.microsoftonline.com/kmsi with a list of parameters taken from previous samplers, and a successful response should contain a security token in the body. Once the token is generated, it can be captured and used for the next activity.
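In JMeter this check is typically wired into the test plan itself (for example with a Regular Expression Extractor and a Response Assertion on the KMSI sampler), but the same verification can be run offline against a saved response body. Below is a minimal sketch that assumes the token is returned as a hidden form field named code; the actual field name depends on your authentication flow, so align it with whatever your extractor captures.

```python
import re
import sys
from pathlib import Path

# Assumption: the token comes back as a hidden form field named "code".
# Match this to the field your Regular Expression Extractor captures.
TOKEN_PATTERN = re.compile(r'name="code"\s+value="([^"]+)"')

def has_token(response_body: str) -> bool:
    """Return True if the body contains a non-empty hidden token field."""
    match = TOKEN_PATTERN.search(response_body)
    return bool(match and match.group(1))

if __name__ == "__main__":
    # Usage: python check_kmsi.py <path to a saved KMSI response body>
    body = Path(sys.argv[1]).read_text(encoding="utf-8", errors="ignore")
    print("token found" if has_token(body) else "no token - check the sign-in flow")
```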


The following figure shows the response body of a successful KMSI request; it contains the token that was created and returned in the body.

In the following example, however, despite a response code of 200 there is a problem within the response body: the message "We received a bad request" indicates a failure.

 

This typically means that the server received the request but found it incomplete, malformed, or otherwise invalid according to its expectations, commonly because of syntax errors, missing parameters, or similar issues. When this happens during sign-in, authentication fails and the remainder of the test script is effectively invalidated.

When interacting with finance and operations apps, many element properties on the UI are populated through the ProcessMessages service operation of ReliableCommunicationManager.svc. It is crucial to validate this sampler because it plays a significant role in supplying that data (returned in XML format and rendered as JSON on the UI via jQuery). If an abnormal response occurs, checking the response body becomes essential: you may encounter an empty body or an empty message within the response. Even though the server returns a successful HTTP response code, that alone does not validate the testing result, so pay particular attention to such abnormal response bodies, especially empty messages.


The following shows an abnormal response body, in which the message is empty:

This is a normal response body from the server:

If any abnormal response body is detected, the testing result should be marked invalid, even if every response code in the test script is 200.
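If you save the response bodies of the key samplers to disk (for example with JMeter's "Save Responses to a file" listener), this check can be automated. The sketch below scans the saved bodies for an empty body, the "We received a bad request" message, and an empty ProcessMessages message; the directory name and the empty-message pattern are assumptions to adapt to what you actually see in your environment.

```python
from pathlib import Path

RESPONSE_DIR = Path("saved_responses")         # placeholder directory
ERROR_MARKERS = ["We received a bad request"]  # text that indicates a failure
EMPTY_MESSAGE_MARKERS = ["<Message/>"]         # hypothetical pattern; replace with
                                               # the empty-message shape you see in
                                               # your own View Results Tree

invalid = []
for path in RESPONSE_DIR.glob("*"):
    if not path.is_file():
        continue
    body = path.read_text(encoding="utf-8", errors="ignore")
    if not body.strip():
        invalid.append((path.name, "empty body"))
    elif any(marker in body for marker in ERROR_MARKERS):
        invalid.append((path.name, "error message in body"))
    elif any(marker in body for marker in EMPTY_MESSAGE_MARKERS):
        invalid.append((path.name, "empty message element"))

if invalid:
    print("Testing result should be marked INVALID:")
    for name, reason in invalid:
        print(f"  {name}: {reason}")
else:
    print("No abnormal response bodies detected.")
```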

 

Confirm the presence of the expected workload within the system

Another aspect we can consider when determining the validity of the testing results is the workload on the system. During single-user or multi-user simulations, we expect the system to operate under a specific workload, including resource usage, and we can gather information to confirm from the system's viewpoint that the testing script executed correctly. Workload assessment can be done in various ways: since we performed the performance testing in a sandbox environment, you can directly examine the workload on the database or gather telemetry data from LCS.

In LCS Environment monitoring, we can look at activity within the testing window (for instance, I tested PO creation and workflow submission from 09:36 to 09:38). First of all, there is a spike in the AOS active request count during the testing window, which means load was sent to the application, requests reached the AOS, and this is eventually reflected in the chart.

 

With 8 AOS instances and a simulation of 100 concurrent users, and assuming the load balancer distributes users evenly so that each AOS effectively manages around 13 users, each AOS should handle a load of at least 1 user with the 11 credentials used for this simulation. Reviewing the User load in LCS monitoring, we observed 2 user sessions on AOS3, 3 sessions on AOS6, and 1 session on AOS8, which aligns well with this scenario.
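As a quick sanity check, the expected session distribution under even load balancing can be worked out directly from the numbers above (a trivial sketch; substitute your own values):

```python
import math

aos_count = 8
concurrent_users = 100   # target simulation size quoted above
credentials_used = 11    # distinct accounts used in this run

# Expected users per AOS if the load balancer spreads the full load evenly.
print("users per AOS at full load:", math.ceil(concurrent_users / aos_count))  # 13
# Minimum sessions per AOS when only the 11 credentials are signed in.
print("minimum sessions per AOS:", max(1, credentials_used // aos_count))      # 1
```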

 

Given the focus on Purchase order creation and workflow submission scenarios, it's expected that most activities relate to the Purchase module. Examination of the Activity load tab confirms that each AOS is primarily engaged in Purchase order processing tasks such as PurchTable and PurchTableWorkflow actions.

 

 

During testing, the CPU utilization of the AOS rose, peaking at 55%. This increase suggests the system was busier, with the incoming workload running on the servers and consuming CPU resources, which led to the rise in CPU utilization.

 

The telemetry provided by LCS offers various logs, including User login events, which help confirm the number of users, their actions, and access times during testing, aiding in assessing the success of concurrent user simulations.

Overall, LCS telemetry helps confirm that there was a reasonable workload and activity level during performance testing.

 

Verify that data has been accurately created within the system

Data validation is a straightforward way to verify the validity of testing results, except when the performance test only reports on or queries existing data; it relies on data actually being created during the test. In the scenario outlined in this blog post, we simulated 100 concurrent users creating purchase orders and submitting them for workflow approval.

By examining the columns "created date and time" and "created by," we can confirm that the data was indeed created during the performance testing, attributed to the testing user. Furthermore, all purchase orders were successfully submitted to the workflow, resulting in an approval status of "In review."
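If you prefer to confirm this programmatically rather than through the purchase order list page, the created records can also be queried through the environment's OData endpoint. The following is a sketch only, assuming the standard PurchaseOrderHeadersV2 entity is exposed in your environment and that you have already acquired a bearer token; the filter is a placeholder and should be narrowed to the testing users and time window using the fields available on the entity.

```python
# Sketch: verify created purchase orders via the OData endpoint. The entity
# name, the $filter fields, and the token value are assumptions to adapt.
import requests

BASE_URL = "https://yourenvironment.sandbox.operations.dynamics.com"
TOKEN = "<bearer token acquired via Microsoft Entra ID>"  # placeholder

query = (
    f"{BASE_URL}/data/PurchaseOrderHeadersV2"
    "?$filter=OrderVendorAccountNumber ne ''"   # hypothetical filter; narrow it
    "&$count=true&$top=10"                      # to the test users / time window
)

response = requests.get(
    query,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=60,
)
response.raise_for_status()
payload = response.json()
print("Purchase orders returned:", len(payload.get("value", [])))
```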

 

Summary

In this blog post, I shared practices for validating the results of UI performance testing of Dynamics 365 finance and operations apps, from checking key samplers and response bodies to confirming the workload and verifying the data created in the system. These checks are crucial for ensuring the accuracy and completeness of testing outcomes, and data validation is a key component. In addition to the suggested approaches, I encourage you to explore other validation methods tailored to your own testing scenarios.

 

Blog series included:

 
