Much of the upgrade series so far has been about “knowing your environment”. As I get closer to the end of the overall series, there are a couple more topics to discuss. Today’s topic is testing: what to consider and what to plan for.
Posts in the series
- Series intro
- Process overview
- Know your environment (part 1)
- Know your environment (part 2)
- Know your environment (part 3)
- Know your environment (part 4)
- Know your environment (part 5)
- Know your environment (wrap-up)
Types of testing
When I’m planning an upgrade, I think about testing in a few different ways, depending on what is being tested, because each type of testing has different considerations to plan for.
Data Validation
This is the first and usually the most important type of testing, as it validates that the core data upgrade went well. It involves comparing reports from a representative sample of your modules before and after the GP upgrade. At a live upgrade, the typical process is to run various reports right before you hand off to the technical folks who perform the upgrade itself; then, the first thing you do once the environment is ready again is re-check the same reports.
I rarely see issues here, and when I do, it has usually been a case where a user kept using GP after the “cutoff” but before the data was copied, or started using GP again before validation was done. Personally, I have never seen an issue with the actual upgrade failing; however, it’s a step I never recommend skipping. The key is to check a sample of reports across all or most of your modules; don’t just check that the GL data upgraded!
For test upgrades, the larger challenge is timing: when the backups are taken relative to when the “pre” reports are printed. I typically take a set of backups to use for the test upgrade and then restore the same backups to test companies. Users can then print “pre” reports from the test companies and “post” reports from the upgraded test companies to compare. If that’s not an option, the same kind of cutoff planning may need to occur for the test upgrade: stop access to the system, run the “before” reports, take the backups for the upgrade, and then resume normal use of production.
In general, my recommendation is to use out-of-the-box reports for this validation, and to do it first, before other testing starts. If there is an issue with a modified report, that should not prevent you from comparing the “original” report for data validation purposes: the content is what you are validating, not the “look” of the report itself. You don’t want to be in a situation where you can’t complete data validation because your external reporting tool isn’t ready to use yet, or because you need to correct an issue with a modified report first.
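As a rough illustration only (not part of any standard GP toolset), here is a minimal Python sketch of the kind of before/after comparison described above, checking GL open-year totals between a “pre” test company and the upgraded test company. The server name, company database names, and the GL20000 table and columns are assumptions; a real validation would cover a sample of reports across most modules, not just GL.

```python
# Hypothetical sketch: compare GL open-year debit/credit totals between a "pre"
# test company (restored from the pre-upgrade backup) and the upgraded test
# company. Server, database names, and the GL20000 table/columns are assumptions.
import pyodbc

SERVER = "GPSQL01"    # assumption: your SQL Server instance
PRE_DB = "TSTPRE"     # assumption: company restored from the pre-upgrade backup
POST_DB = "TSTPOST"   # assumption: company that went through the test upgrade

QUERY = """
SELECT OPENYEAR, SUM(DEBITAMT) AS debits, SUM(CRDTAMT) AS credits
FROM GL20000
GROUP BY OPENYEAR
ORDER BY OPENYEAR
"""

def totals(database):
    # Pull summarized totals per open year from one company database.
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={SERVER};"
        f"DATABASE={database};Trusted_Connection=yes"
    )
    with conn:
        rows = conn.cursor().execute(QUERY).fetchall()
    return {row.OPENYEAR: (row.debits, row.credits) for row in rows}

pre, post = totals(PRE_DB), totals(POST_DB)
for year in sorted(set(pre) | set(post)):
    if pre.get(year) != post.get(year):
        print(f"Mismatch for {year}: pre={pre.get(year)} post={post.get(year)}")
    else:
        print(f"{year}: totals match")
```

A query like this is only a supplement to comparing the actual reports; the point is simply that the “pre” and “post” numbers come from the same cutoff so any difference is meaningful.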
Process testing
This is typically the area where the most testing time is spent, and it is what most people refer to or think of as “testing” the upgrade. This is where users test the same functions and processes they perform day to day, as well as “routine-based” processes like month-end or year-end routines. The closer users can get to doing the same things as normal, the more likely it is that you will have caught any issues that need to be reviewed prior to go-live.
I encourage users initially to post and print the posting journals (to PDF or paper) and check them. The reason I suggest this is that posting journals and edit lists are often among the modified reports in your environment, so reviewing their content and looking for errors in the reports themselves is necessary. Once the reports are deemed to be OK, they may not need to be printed for every transaction or batch posted after that.
Other types of testing
The rest are offshoots of the areas I’ve covered in the “Know your environment” parts of this series: modified reports and forms testing, reporting tools testing, integration testing, customization testing, and external application testing.
Each of those areas has its own things to consider, and while testing them can be folded into the process testing, there are often tests specific to those items outside of it: “Does the tool work?”, “Can I get into the tool?”, and so on.
An example I use to explain this is reporting tool testing. “Can I connect to the data source?” is something you need to have tested before you hand a user instructions to test a process that requires that report as part of its output.
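For example, a connectivity smoke test can be as simple as the hypothetical sketch below, which just confirms the reporting tool’s ODBC data source is reachable before process testing starts. The DSN name is an assumption; substitute whatever your reporting tool actually points at.

```python
# Hypothetical sketch: a quick "can I connect to the data source?" check for a
# reporting tool's ODBC source. The DSN name is an assumption for illustration.
import pyodbc

REPORTING_DSN = "GP_Reporting"  # assumption: the ODBC DSN your reporting tool uses

try:
    with pyodbc.connect(f"DSN={REPORTING_DSN};Trusted_Connection=yes", timeout=10) as conn:
        conn.cursor().execute("SELECT 1").fetchone()
    print(f"Data source '{REPORTING_DSN}' is reachable.")
except pyodbc.Error as exc:
    print(f"Could not connect to '{REPORTING_DSN}': {exc}")
```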
The other part of this section can be a simple “inventory” test: make sure that all of the ISV products you are licensed for and use are actually installed. It wouldn’t be the first time a product was missed during installation, and depending on what it is, the omission may not be noticed immediately.
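One way to approach that inventory check, sketched below purely as an illustration, is to compare the products listed in the GP launch file against the list you expect. This assumes the common Dynamics.set layout (a product count on the first line followed by ID/name pairs); the file path and the expected product list are placeholders for your environment.

```python
# Hypothetical sketch: compare products listed in Dynamics.set against an
# expected list. Assumes the launch file starts with a product count followed
# by alternating product ID / product name lines; path and EXPECTED are placeholders.
from pathlib import Path

SET_FILE = Path(r"C:\Program Files (x86)\Microsoft Dynamics\GP\Dynamics.set")
EXPECTED = {"Microsoft Dynamics GP", "SmartList Builder"}  # assumption: your products

lines = [ln.strip() for ln in SET_FILE.read_text().splitlines() if ln.strip()]
count = int(lines[0])                              # first line: number of products
names = {lines[2 + i * 2] for i in range(count)}   # lines alternate: ID, then name

missing = EXPECTED - names
print("Installed products:", ", ".join(sorted(names)))
print("Missing products:", ", ".join(sorted(missing)) or "none")
```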
Test Plans
Make sure your planning includes creating test plans, or at least identifying what you want to test. Testing is useless if you don’t know what the expected outcome is, or if users are just “winging it” without a plan.
Testing is a chance to ensure the odd situations work as well as the “vanilla” ones; the simple scenarios are not the only thing you should be testing. For example, if you use multicurrency, include M/C transactions in your testing.
The point of testing
The goal of a test upgrade is to ensure the process works end to end. The entire exercise is a test, right from the way you install the new version through to making sure that your test transactions post correctly. I tell clients you are not doing “QA” (quality assurance) to prove that Dynamics GP works; you’re testing that you are able to do the things you do day to day, and noting what is new, different, or broken so you can address it before go-live. Or, in some cases, so you know what to expect at go-live.
I’m not a big fan of “post 3 of every type of transaction” test plans. I would rather see a focus on process-based testing (“order to cash”, “procure to pay”, etc.), as it mimics how you use GP day to day. Take transactions all the way through their lifecycle: create a PO, receive it, invoice-match it, return the items, and in between, check the reporting you would normally do daily to see how those transactions flow through.
One thing to note here is that I am not suggesting you “run parallel” with production during a test upgrade. While you can take some things you do in production and also do them in test, trying to keep the two systems in sync is, in my opinion, a mistake. Any time organizations have done this, they have spent more time double-entering and reconciling why the systems are not identical than it’s worth.
Summary
This is a very brief overview of testing considerations you might think about in planning your next upgrade. Finding issues during testing is OK; think of them as yet another reason why you’re doing a test upgrade. Would you rather find out something isn’t working after go-live, or during the test phase when you have more time to sort things out?