With the availability of the Acceptance Test Library (ATL), let me share some guidelines for writing tests. As you embark on writing tests, these will help you steer the course and get the most out of the investment.

We write unit tests to:

  • Detect regressions automatically
    Automated tests allow the engineer to detect unintentional behavior with a minimum of effort.
  • Facilitate changes
    Automated tests allow the engineer to refactor code at a later date and verify that everything still works correctly.
  • Get better designs and interfaces
    A great by-product of Test-Driven Development.
  • Document behavior and interfaces
    Automated tests document how the system behaves and how engineers should interact with it. This documentation is never stale.

Unfortunately, automated testing doesn't by nature give us all these benefits. Authoring tests is a discipline in itself and should be approached with the same level of professionalism as authoring production-quality code. For automated tests I'll discuss five qualities (and their anti-qualities). If you manage to write tests that adhere to these qualities, you'll be much closer to the goal.

Test qualities

Reliability

Tests produce the same result when run repeatedly. Like running into a brick wall.

If your tests are not reliable, they are worthless. If engineers even consider that a failing test might be unrelated to their changes, then the test is counterproductive and better eliminated. Fix it or delete it. Otherwise, engineers will waste time investigating, grow numb, and ignore actual regressions.

Tests can become unreliable when they leak state. Leaks can be hard to track down, as the leaking test typically passes while a downstream test fails. In X++ there are usually just two types of leakage: database and in-memory state. The test execution framework does a lot to reset state between tests; this includes restoring a snapshot of the database between each test method execution and clearing global caches.

A common cause of test flakiness is when the test uses a single class instance for several test methods, which the SysTestCaseUseSingleInstance attribute below enables. This means that any class-level members keep their values and thus can impact downstream tests in the same class. Sometimes a downstream test even depends on the leaked state, making it fail when run individually. Test methods must be able to run in any sequence, and individually.

[SysTestCaseUseSingleInstance]
public class MyTestClass extends SysTestCase
{ }

Another common source of flakiness is when expecting time to stand still. The following test will fail whenever a second elapses between the first line and the actual insert.

[SysTestMethod]
public void transactionCreatedDateTimeIsRight()
{
    var expectedTime = DateTimeUtil::getSystemDateTime();
    transaction.insert();
    this.assertEquals(expectedTime, transaction.createdDateTime, "Transaction time is wrong");
}
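
One way to make this test reliable is to bracket the insert with two clock samples and assert that the created time falls inside that window, instead of expecting an exact instant. A minimal sketch, reusing the illustrative transaction buffer from above:

```xpp
[SysTestMethod]
public void transactionCreatedDateTimeIsRight()
{
    // Sample the clock on both sides of the insert.
    var before = DateTimeUtil::getSystemDateTime();
    transaction.insert();
    var after = DateTimeUtil::getSystemDateTime();

    // The created time must fall within the window; elapsed seconds no longer matter.
    this.assertTrue(
        transaction.createdDateTime >= before && transaction.createdDateTime <= after,
        "Transaction time is outside the expected window");
}
```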

Maintainability

Tests require little or no effort to update when the product changes. It is easy to knit from a rolled-up ball of yarn.

It is common to mix up maintainability and reliability, but the two concepts are distinctly different. Reliability is about how a test behaves when no changes are made; maintainability is about how well the test copes with changes in the product.

Dealing with infolog messages in tests can be challenging, and they are a good example of what (not) to test for. In most cases tests should not care about the actual messages. Suppose you provide a better message - should a test then fail?

Here is an example of a unit test verifying that the caption is not blank - but not exactly what the caption is.

[SysTestMethod]
void captionIsSpecified() 
{ 
    this.assertTrue(myClass.caption() != '', "No caption specified"); 
}

Here is an example of a unit test verifying that a particular label is used. This test makes it hard(er) to change which label the product uses. A test shouldn't care which label is used, only that a caption is provided.

[SysTestMethod]
public void captionHasRightLabel() 
{ 
    this.assertEquals("@SYS423123", myClass.caption(), "Wrong label identifier is used"); 
}

In more general terms, the Arrange/Act/Assert pattern forces you to explicitly state what is being set up, what is being tested, and what is being asserted. Start each test by adding the three comments, and then write the actual code.

[SysTestMethod]
public void casingInvariant()
{
    InventDim inventDim, newInventDim;

    ttsbegin;

    // Arrange
    inventDim.LicensePlateId   = 'LP1';
    inventDim.InventStatusId   = 'Status1';

    newInventDim.LicensePlateId   = 'lp1';
    newInventDim.InventStatusId   = 'status1';

    // Act
    inventDim    = InventDim::findOrCreate(inventDim);
    newInventDim = InventDim::findOrCreate(newInventDim);

    // Assert
    this.assertEquals(inventDim.RecId, newInventDim.RecId, "InventDim records should match");

    ttsabort;
}

If the intent remains obvious, the groups can be collapsed.

[SysTestMethod]
public void valueNullIsEmptyString() 
{ 
    // Arrange/Act 
    str returnValue = SysQuery::value(null); 
   
    // Assert 
    this.assertEquals('', returnValue, "Null is not handled correctly"); 
}


Performance

Getting test results in a timely manner increases the value of the testing investment. Ideally, all tests can be run locally within a few minutes. This way regressions can be detected before the code is shared. To achieve this goal, ensure all unit tests execute in under 1 second, and component tests in under 20 seconds.

Tests can be optimized in many ways:

  • Avoid unnecessary setup.
    For example, do not set up number sequences explicitly; use the SysTestCaseAutomaticNumberSequences attribute instead. Setting up number sequences explicitly quickly results in many code lines being blindly copied across tests, meaning too much is being set up. Conversely, the attribute ensures the required number sequences are set up just-in-time. Much simpler and quicker.
  • Minimize use of form adapters.
    Form adapters are great, as they allow you to navigate the UI from a test in a reliable way. However, doing so requires the entire form stack to be exercised, including security checks and the initialization and population of data sources and controls. That is significant overhead. If you just need to press a button, your test will be much faster if you invoke that button's logic (typically a class) directly.
  • Minimize dependencies.
    Whenever you can remove a dependency that is not relevant for the test, do so. Dependency injection is a great way to achieve this when the code supports it. Many other techniques, like mocking and faking, can also be used.
  • Avoid redundant tests.
    It is easy to copy/paste too much logic across tests, for example when implementing coverage for variations. Typically, one end-to-end component test complemented by a series of unit tests targeting the variations is what you should aim for. This is commonly referred to as the test pyramid.
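
As a sketch of the first point, the automatic approach is a single attribute on the test class. The attribute comes from the test framework; the class name here is hypothetical:

```xpp
[SysTestCaseAutomaticNumberSequences]
public class MySalesOrderCreationTest extends SysTestCase
{
    // Number sequences are provisioned just-in-time when the code under test
    // requests the next number - no explicit number sequence setup is needed.
}
```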

Simplicity

Tests are easy to understand with clear intent, and they are simple to run, i.e., they require no special infrastructure to orchestrate execution. The tests read as specification or documentation.

Look at this example of an overly complex test:

[SysTestMethod]
public void contributionConstantPerUnitIsCorrect()
{
    // Arrange
    CostTmpSheetCalcResult costTmpSheetCalcResult;
    costTmpSheetCalcResult.ContributionConstant = 5.00;
    costTmpSheetCalcResult.Qty = 1;
    costTmpSheetCalcResult.IsHeader = NoYes::Yes;
    costTmpSheetCalcResult.Level = 10;
    costTmpSheetCalcResult.doInsert();

    // Act/Assert
    this.assertEquals(5.00, costTmpSheetCalcResult.contributionConstantPerUnit(), "The calculation of constant contribution per unit is incorrect.");
}


It has several problems:

  1. Some of the setup is not needed. While it is fast, it obscures the intent of the test.
  2. It has some magic numbers - the relationships are not obvious.
  3. One of the magic numbers makes the test less likely to catch regressions.

Here is a better rewrite of the same test:

[SysTestMethod]
public void contributionConstantPerUnitIsCorrect() {  // Arrange     CostTmpSheetCalcResult costTmpSheetCalcResult;     const var contributionConstant = 5; const var qty = 2;     costTmpSheetCalcResult.ContributionConstant = contributionConstant;     costTmpSheetCalcResult.Qty = qty;     costTmpSheetCalcResult.doInsert();     // Act/Assert     this.assertEquals(qty * contributionConstant, costTmpSheetCalcResult.contributionConstantPerUnit(), "The calculation of constant contribution per unit is incorrect."); }

Preciseness

Tests fail for one reason only, and it is easy to track down why a test failed. Often test execution occurs somewhere else - and you only have access to the result. Getting crisp and concise results every time pays off. Ideally, you wouldn't even have to look at the test's implementation to understand what is broken; the test method's name and the failure message should suffice.

When a test can fail for just one reason, it only needs to verify one condition. Here is a test that verifies that a factory method returns an instance of the right type. Notice that it verifies one condition - despite having two assert statements.

[SysTestMethod]
public void newFromIdentifierReturnsValidInstance() 
{ 
    MeasureTypeIdentifier identifier = classStr(MeasureTypeTestable); 
    MeasureType measureType = MeasureTypeFactory::newFromIdentifier(identifier); 
     
    this.assertNotNull(measureType, "Instance not created"); 
    this.assertTrue(measureType is MeasureTypeTestable, "Incorrect instance created"); 
}

Suppose this test fails. The result could be: 

"The test newFromIdentifierReturnsValidInstance() failed. (Expect: Not null, actual: null. "Instance not created")".

Compare this with the test below. It could fail like this:

"The test testContributionAndCostMethods() failed. (Expect: 10, actual: 11. "Test 4")".

I know what I prefer.

[SysTestMethod]
void testContributionAndCostMethods() 
{ 
    CostTmpSheetCalcResult costTmpSheetCalcResult; 
 
    costTmpSheetCalcResult.clear(); 
    costTmpSheetCalcResult.NodeCode = 'Code'; 
    costTmpSheetCalcResult.NodeDescription = 'Description'; 
    costTmpSheetCalcResult.ContributionVariable = 20.00; 
    costTmpSheetCalcResult.ContributionConstant = 10.00; 
    costTmpSheetCalcResult.CostVariable = 22; 
    costTmpSheetCalcResult.CostFixed = 8; 
    costTmpSheetCalcResult.Level = 1; 
    costTmpSheetCalcResult.Qty = 2; 
    costTmpSheetCalcResult.IsHeader = NoYes::Yes; 
    costTmpSheetCalcResult.IsTotal = NoYes::Yes; 
    costTmpSheetCalcResult.doInsert(); 
 
    select firstonly costTmpSheetCalcResult; 
 
    this.assertEquals(5.00, costTmpSheetCalcResult.contributionConstantPerUnit(), "Test 1"); 
    this.assertEquals(30.00, costTmpSheetCalcResult.contributionTotal(), "Test 2"); 
    this.assertEquals(15.00, costTmpSheetCalcResult.contributionTotalPerUnit(), "Test 3"); 
    this.assertEquals(10.00, costTmpSheetCalcResult.contributionVariablePerUnit(), "Test 4"); 
    this.assertEquals(4.00, costTmpSheetCalcResult.costFixedPerUnit(), "Test 5"); 
    this.assertEquals(30.00, costTmpSheetCalcResult.costTotal(), "Test 6"); 
    this.assertEquals(15.00, costTmpSheetCalcResult.costTotalPerUnit(), "Test 7"); 
    this.assertEquals(11.00, costTmpSheetCalcResult.costVariablePerUnit(), "Test 8"); 
}


Credits

Credits are due to Dave Froslie for thought leadership in this area and the artwork in this post.

THIS POST IS PROVIDED AS IS; AND CONFERS NO RIGHTS.