Best practice: focus on repeatability of your automated tests

This post was published on November 22, 2013

This is the first installment in a series of posts on test automation best practices. Notwithstanding the rapid growth and evolution of the test automation field, a number of best practices can be identified that stand the test of time. Adhering to these best practices will improve the added value of your automated tests, no matter the scale or scope of your tests, or the technology and tools used to design, implement, and execute them.

In order for your automated test suites to be truly efficient, they should be set up with repeatability in mind. This means that your tests should execute with the click of a button, or by entering a single command at the command prompt, again and again, without the need for manual intervention during or between test runs. They should also yield the same (or comparable) test results every single time, except of course when the system under test changes or fails.

To achieve or improve repeatability, you need to pay attention to a number of things during the implementation of your automated tests. I will address some of these in this article. There are probably lots of other aspects to be considered, but these stand out for me.

One disclaimer: the repeatability factor does not apply (or applies to a far lesser extent) to projects where there’s just a single test run to be executed, for instance after a conversion or migration project. If you’re involved in such a project, it’s probably not worth putting extra effort into making your automated tests repeatable.

Start small
As with every software development project, it is best to start small when implementing automated tests. Automate one or two test cases, or even just one or two steps of the process to be automated, and execute them over and over again to make sure they are stable and repeatable. Once your small test cases have proven to be repeatable, you can build on them to create larger test suites. Verify that every major change you make to your tests does not compromise that repeatability.
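To make the “execute them over and over again” part concrete, here is a minimal sketch in Python of how a single small step could be exercised repeatedly to check that it behaves the same every time. The create_order function is a hypothetical stand-in for whatever step you are automating, not part of any real tool or library.

    # Minimal sketch: run one small automated step many times and verify
    # it produces the same result on every execution.
    def create_order(customer_id):
        # Hypothetical stand-in for the real step you are automating.
        return {"customer": customer_id, "status": "NEW"}

    def test_create_order_is_repeatable():
        results = [create_order(customer_id=42)["status"] for _ in range(50)]
        # Every execution should yield the same outcome.
        assert all(status == "NEW" for status in results)

Only once a small check like this proves stable is it worth building larger test suites on top of it.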

Watch your test data
An important issue when designing and implementing repeatable automated tests is the use of test data. Scripts that alter or consume test data need extra attention with regard to repeatability. A test data object, such as a customer or an order, used in a given test run may be altered or removed during that run, rendering it unsuitable or unavailable for subsequent test runs.

Roughly speaking, there are three possible approaches for dealing with test data that is altered during a test run:

  • Create the test data during the test run. For example, if your test script covers the processing of an order, have your script create a new order before processing it to make sure there’s always an order to be processed.
  • Reset the test data to its original state after the test run. For example, if your test script covers changing a customer’s address to a foreign location, reset it to its original value after the test script has been executed (through whichever interface available).
  • Select the test data to be used at the start of your test run. Rather than using previously defined sets of test data, have your script perform a query on the available test data set to select a test data object to use in a particular test run. Make sure that your script can handle occasions where there’s no suitable test data object available.
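As a rough illustration, the sketch below combines the first two approaches in a pytest-style test: a fixture creates the order the test needs and removes it again afterwards. The in-memory ORDERS dictionary and the helper functions are hypothetical stand-ins for your real system under test, not part of any real library.

    import pytest

    ORDERS = {}  # stand-in for the application's order data

    def create_order(customer_id):
        # Approach 1: create the test data the script needs during the test run.
        order_id = len(ORDERS) + 1
        ORDERS[order_id] = {"customer": customer_id, "status": "NEW"}
        return order_id

    def process_order(order_id):
        ORDERS[order_id]["status"] = "PROCESSED"
        return ORDERS[order_id]["status"]

    @pytest.fixture
    def fresh_order():
        order_id = create_order(customer_id=42)
        yield order_id
        # Approach 2: reset/remove the data afterwards so the next run starts clean.
        ORDERS.pop(order_id, None)

    def test_process_order(fresh_order):
        assert process_order(fresh_order) == "PROCESSED"

The third approach would replace create_order with a query that picks a suitable existing order, and skip or fail gracefully when none is available.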

Be ready for continuous integration
With the current trend of test and development teams working ever more closely together in increasingly short development and test cycles (think Agile / Scrum and DevOps), continuous integration (CI) is applied not only to development tasks, but to system and integration testing as well. To keep up with the development team, automated tests should integrate seamlessly with the continuous integration platform in use. Most open source and COTS test automation tools provide a command line interface to execute test runs and to export and distribute test result reports. That alone doesn’t make your test scripts automagically suited for use in a CI environment, though: only test scripts or frameworks that can be run again and again without manual intervention can be successfully integrated into the CI process, so make sure yours fit the bill!
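As a sketch of what such an integration could look like, the snippet below shows what a CI job step might do: invoke the test runner through its command line interface, export a machine-readable report, and fail the build on a non-zero exit code. It assumes pytest as the test runner and report.xml as an arbitrary report location; substitute the CLI of whatever tool you use.

    import subprocess
    import sys

    # Invoke the test runner exactly as a CI server would: non-interactively,
    # with results exported to a report file the CI server can pick up.
    result = subprocess.run(
        ["pytest", "tests/", "--junitxml=report.xml"],
        check=False,
    )

    # A non-zero exit code marks the CI build as failed.
    sys.exit(result.returncode)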

[Figure: a schematic representation of continuous integration]

Effects of repeatability on the acceptance and the ROI of test automation
Once your automated test scripts run repeatably, you should see some pretty positive results with regard to the acceptance of automated testing and the ROI of the test automation project:

  • Repeatable tests can be run on demand, as often as required, leading to a dramatic reduction of the cost per test run and the time needed to complete a development/test cycle.
  • Automated tests that can run unattended and be repeated on demand will appeal to everybody from developers to upper management, increasing the perceived value of test automation and ultimately also the trust in the product delivered by your team.

Are your tests as repeatable as they can be? Let me know how you achieved your degree of repeatability and the issues you had to overcome!

"