Running your tests in a specific order

General consensus within the test automation community is that automated tests should be able to run independently. That is, tests should be runnable in any given order, and the result of a test should not depend on the outcome of one or more previous tests. This is generally good practice, and one I try to adhere to as much as possible. There is nothing worse than a complete test run being wasted because some initial test data setup action failed, causing all other tests to fail as well. Especially when you’re talking about end-to-end user interface tests that only run overnight because they take so awfully long to finish.

However, in some cases, having your tests run in a specific order can be the most pragmatic or even the only (practical) option. For example, in my current project I am automating a number of regression tests that require specific test data to be present before the actual checks can be performed. And since:

  • I don’t want to rely on test data that’s already present in the database
  • Restoring a database snapshot is not a feasible option (at least not at the moment)
  • Creating suitable test data takes at least a minute, and closer to two for most tests. This has to be done through the user interface, since a lot of data is stored in blobs, making SQL updates a challenging strategy to say the least.

the only viable option is to make the creation of test data the first step in a test run. Creating the necessary test data before each and every individual test would just take too long.

Since we’re using SpecFlow, we create the test data in the first scenario of every feature. All other scenarios in the feature rely on this test data. This means that the test data creation scenario needs to run first, otherwise the subsequent scenarios will fail. Using a Background is not an option, because Background steps are run before each individual scenario, whereas we want to run the test data creation steps only once per feature.

The above situation is just one example of a case where being able to control the execution order of your tests can come in very useful. Luckily, most testing frameworks support this in one or more ways. In the remainder of this post, we’re going to have a look at how you can define test execution order in JUnit, TestNG and NUnit.

JUnit
Before version 4.11, JUnit did not support controlling the test execution order. However, newer versions of the test framework allow you to annotate your test classes using @FixMethodOrder, which enables you to select one of various MethodSorters. For example, the tests in this class are run in ascending alphabetical order, sorted by test method name:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class JUnitOrderedTests {
	
	@Test
	public void thirdMethod() {		
	}
	
	@Test
	public void secondMethod() {		
	}
	
	@Test
	public void firstMethod() {		
	}
}

Running these tests shows that they are executed in the specified order:
JUnit FixMethodOrder result
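Under the hood, name-ascending ordering boils down to sorting the test methods by name before invoking them. As a rough illustration in plain Java (this is not JUnit’s actual implementation, and the class names here are made up), reflection plus a sort reproduces the order you see above:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class NameAscendingSketch {

	// stand-in for the test class above; method declaration order is irrelevant
	static class JUnitOrderedTests {
		public void thirdMethod() {}
		public void secondMethod() {}
		public void firstMethod() {}
	}

	// sort the declared methods alphabetically, as MethodSorters.NAME_ASCENDING does
	static List<String> executionOrder(Class<?> testClass) {
		return Arrays.stream(testClass.getDeclaredMethods())
				.map(Method::getName)
				.sorted()
				.collect(Collectors.toList());
	}

	public static void main(String[] args) {
		System.out.println(executionOrder(JUnitOrderedTests.class));
		// prints [firstMethod, secondMethod, thirdMethod]
	}
}
```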

TestNG
TestNG offers no fewer than three ways to order your tests:

Using preserve-order in the testng.xml file
You can use the preserve-order attribute in the testng.xml file (where you specify which tests will be run) to have TestNG run the tests in the order they appear in the XML file:

<test name="OrderedTestNGTests" preserve-order="true">
	<classes>
		<class name="TestNGTestClass">
			<methods>
				<include name="testOne" />
				<include name="testTwo" />
			</methods>
		</class>
	</classes>
</test>

Using the priority attribute
You can also use the priority attribute in your @Test annotation to prioritize your test methods and determine the order in which they are run:

public class TestNGPrioritized {
	
	@Test(priority = 3)
	public void testThree() {		
	}
	
	@Test(priority = 1)
	public void testOne() {		
	}
	
	@Test(priority = 2)
	public void testTwo() {		
	}	
}

Using dependencies
In TestNG, you can have tests and test suites depend on other tests / test suites. This also implicitly defines the order in which the tests are executed: when test A depends on test B, test B will automatically be run before test A. These dependencies can be defined in code:

public class TestNGOrderedTests {
	
	@Test(dependsOnMethods = {"parentTest"})
	public void childTest() {		
	}
	
	@Test
	public void parentTest() {		
	}
}

This works on method level (using dependsOnMethods) as well as on group level (using dependsOnGroups). Alternatively, you can define dependencies on group level in the testng.xml file:

<test name="TestNGOrderedTests">
	<groups>
		<dependencies>
			<group name="parenttests" />
			<group name="childtests" depends-on="parenttests" />
		</dependencies>
	</groups>
</test>
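Dependency-based ordering amounts to a topological sort: every test’s dependencies are scheduled before the test itself. A minimal plain-Java sketch of that idea (not TestNG’s actual scheduler; the test names mirror the example above):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DependencyOrderSketch {

	// depth-first scheduling: dependencies run before the test that declares them
	static void schedule(String test, Map<String, List<String>> dependsOn, List<String> runOrder) {
		if (runOrder.contains(test)) {
			return;
		}
		for (String dependency : dependsOn.get(test)) {
			schedule(dependency, dependsOn, runOrder);
		}
		runOrder.add(test);
	}

	public static void main(String[] args) {
		// test -> the tests it depends on
		Map<String, List<String>> dependsOn = new LinkedHashMap<>();
		dependsOn.put("childTest", List.of("parentTest"));
		dependsOn.put("parentTest", List.of());

		List<String> runOrder = new ArrayList<>();
		for (String test : dependsOn.keySet()) {
			schedule(test, dependsOn, runOrder);
		}
		System.out.println(runOrder); // parentTest is scheduled before childTest
	}
}
```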

NUnit
NUnit (the .NET member of the xUnit family of test frameworks) does not offer an explicit way to order tests, but you can control the order in which they are executed by naming your methods appropriately: tests are run in alphabetical order, based on their method name. Note that this is an undocumented feature that may be altered or removed at will, but it still works in NUnit 3, which was recently released, and I happily abuse it in my current project.

At the beginning of this post, I mentioned that in my current project, we use SpecFlow to specify our regression tests. We then execute our SpecFlow scenarios using NUnit as a test runner, so we can leverage this alphabetical test order ‘trick’ by naming our SpecFlow scenarios alphabetically inside a specific feature. This gives us a way to control the order in which our scenarios are executed:

Scenario: 01 Create test data
	Given ...
	When ...
	Then ...
	
Scenario: 02 Modify data
	Given ...
	When ...
	Then ...

Scenario: 03 Remove modified data
	Given ...
	When ...
	Then ...

Again, it is always best to create your tests in such a way that they can be run independently. However, sometimes this just isn’t possible or practical. In those cases, you can employ one of the strategies listed in this post to control your test execution order.

An introduction to property-based testing with JUnit-Quickcheck

Even though I consider an extensive set of unit and unit integration tests a good thing in almost any case, there is still a fundamental shortcoming in automated checks that I feel should be addressed more often. Almost all checks I come across are very much example-based, meaning that only a single combination of input values is checked. This goes for unit, API-level, and UI-driven tests alike, by the way. Of course, this is already far better than not checking anything at all, but how do you make sure that your check passes (and thus your application works) for all, or at least a considerable subset, of the possible input parameter combinations?

In most cases, it is computationally impossible to perform a check for every combination of all possible input values (have you ever tried to work out what all possible values for a single string input parameter are?), so we need an approach that is able to generate a lot of (preferably random) input values that satisfy a set of predicates, and subsequently verify whether the check holds for all these input values and value combinations. Enter property-based testing (see this post on Jessica Kerr’s blog for an introduction to the concept).

What is property based testing?
In short, property-based testing addresses pretty much exactly the problem described above: based on an existing check, randomly generate a large number of input parameter combinations that satisfy predefined properties and see whether the check passes for each of these combinations. This includes using negative and empty parameter values and other edge cases, all to try and break the system (or prove its robustness, if you want to look at it from the positive side…). You can do this manually – although that would constitute a lot of effort – or, and given the nature of this blog this is of course the preferred method, use a tool to do the menial work for you.

Why should you use property based testing?
Again, the answer is in the first paragraph: because property-based testing will give you far more information about the functional correctness and robustness of your software than mere example-based checks. Especially when combined with mutation testing, property-based testing will likely leave you with a far more powerful and effective unit and unit integration test suite.

Tool: JUnit-Quickcheck
So, what tools are available for property-based testing? The archetypical tool is QuickCheck for the Haskell language. Most other property-based testing tools are somehow derived from QuickCheck. This is also the case for JUnit-Quickcheck, the tool that will be used in the rest of this blog post. As the name suggests, it is a tool for the Java language, based on JUnit.

Our system under test: the Calculator class
For the examples in this blog post, we will use the same Calculator class that was used when we talked about mutation testing. In particular, we are going to perform property-based testing on the add() method of this simple calculator:

public class Calculator {

	int valueDisplayed;

	public Calculator() {
		this.valueDisplayed = 0;
	}

	public void add(int x) {
		this.valueDisplayed += x;
	}

	public int getResult() {
		return this.valueDisplayed;
	}
}

An example-based unit test for the add() method of this admittedly very basic calculator could look like this:

@Test
public void testAdditionExampleBased() {
		
	Calculator calculator = new Calculator();
	calculator.add(2);
	assertEquals(calculator.getResult(), 2);		
}

Using JUnit-Quickcheck, we can replace this example-based unit test with a property-based test as follows:

@RunWith(JUnitQuickcheck.class)
public class CalculatorPropertyTests {

	@Property(trials = 5)
	public void testAddition(int number) {
		
		System.out.println("Generated number for testAddition: " + number);
		
		Calculator calculator = new Calculator();
		calculator.add(number);
		assertEquals(calculator.getResult(), number);
	}
}

The @Property annotation marks this test as a property-based test, while the class-level @RunWith annotation tells JUnit to run these tests using JUnit-Quickcheck. Note that you can mix example-based and property-based tests in the same test class. As long as you run your test class using JUnit, all public void no-parameter methods annotated with @Test will be run just as in plain JUnit, while all methods annotated with @Property will be run as property-based tests.

The trials = 5 attribute tells JUnit-Quickcheck to generate 5 random parameter values (also known as samples); the default is 100. The System.out.println call writes the generated parameter value to the console, purely for demonstration purposes:

Output of basic JUnit-Quickcheck property

As you can see, JUnit-Quickcheck randomly generated integer values and performed the check using each of them. The property passes, telling us that our add() method seems to work quite well, even with very large or negative integers. This is information you wouldn’t get from purely example-based tests. Sweet!

Constraining property values
Purely random integers seem to work well for our simple add() method, but obviously there are cases where you want to define some sort of constraints on the values generated by your property-based testing tool. JUnit-Quickcheck provides a number of options to do so:

1. Using the JUnit Assume class
Using Assume, you can define assumptions on the values generated by JUnit-Quickcheck. For example, if you only want to test your add() method using positive integers, you could do this:

@Property(trials = 5)
public void testAdditionUsingAssume(int number) {
		
	assumeThat(number, greaterThan(0));
		
	System.out.println("Generated number for testAdditionUsingAssume: " + number);
		
	Calculator calculator = new Calculator();
	calculator.add(number);
	assertEquals(calculator.getResult(), number);
}

When you run these tests, you can see that values generated by JUnit-Quickcheck that do not satisfy the assumption (in this case, three of them) are simply discarded:

Output of JUnit-Quickcheck using Assume

2. Using the @InRange annotation
With this annotation, you can actually constrain the values that are generated by JUnit-Quickcheck:

@Property(trials = 5)
public void testAdditionUsingInRange(@InRange(minInt = 0) int number) {
		
	System.out.println("Generated number for testAdditionUsingInRange: " + number);
		
	Calculator calculator = new Calculator();
	calculator.add(number);
	assertEquals(calculator.getResult(), number);
}

Contrary to the approach using the Assume class, where values generated by JUnit-Quickcheck are filtered after generation, with the @InRange approach you will always end up with the required number of samples:

Output of JUnit-Quickcheck using InRange
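The distinction is worth spelling out: Assume filters values after generation, while @InRange constrains generation itself. A toy sketch of constrained generation in plain Java (not junit-quickcheck’s internals; the class name is made up): every sample is drawn directly from the allowed range, so none are discarded:

```java
import java.util.Random;

public class InRangeSketch {

	public static void main(String[] args) {
		Random random = new Random(42); // fixed seed for reproducibility
		int minInt = 0;
		// draw 5 samples directly from [minInt, Integer.MAX_VALUE - 1]; nothing is discarded
		for (int trial = 0; trial < 5; trial++) {
			int number = minInt + random.nextInt(Integer.MAX_VALUE);
			System.out.println("sample: " + number);
		}
	}
}
```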

3. Using constraint expressions
Here, just like with the Assume approach, values generated by JUnit-Quickcheck are filtered after generation, this time using a satisfies predicate:

@Property(trials = 5)
public void testAdditionUsingSatisfies(@When(satisfies = "#_ >= 0") int number) {
				
	System.out.println("Generated number for testAdditionUsingSatisfies: " + number);
		
	Calculator calculator = new Calculator();
	calculator.add(number);
	assertEquals(calculator.getResult(), number);
}

The difference is that when the discard ratio (the percentage of generated values that do not satisfy the constraints defined) exceeds a certain threshold (0.5 by default), the property fails:

Output of JUnit-Quickcheck using the satisfies predicate

Error generated by JUnit-Quickcheck when discard threshold is not met
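The discard ratio itself is easy to picture: count how many generated values fail the predicate, relative to the total number generated. A toy sketch in plain Java (junit-quickcheck’s exact bookkeeping may differ; the class name is made up):

```java
import java.util.Random;

public class DiscardRatioSketch {

	public static void main(String[] args) {
		Random random = new Random(7); // fixed seed for reproducibility
		int discarded = 0;
		int generated = 100;
		for (int i = 0; i < generated; i++) {
			int candidate = random.nextInt();
			if (candidate < 0) { // the predicate: non-negative values only
				discarded++;
			}
		}
		double ratio = (double) discarded / generated;
		System.out.println("discard ratio: " + ratio);
		// past the (default 0.5) threshold, the property would be failed outright
		System.out.println(ratio > 0.5 ? "property would fail" : "property would pass");
	}
}
```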

Depending on your preferences and project requirements, you can select any of these strategies for constraining your property values.

Additional JUnit-Quickcheck features
JUnit-Quickcheck comes with some other useful features as well:

  • Fixing the seed used to generate the random property values. You can use this to have JUnit-Quickcheck generate the same values for each and every test run. You may want to use this feature when a property fails, so that you can test the property over and over again with the same set of generated values that caused the failure in the first place.
  • When a property fails for a given set of values, JUnit-Quickcheck will attempt to find smaller sets of values that also fail the property, a technique called shrinking. See the JUnit-Quickcheck documentation for an example.
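To make shrinking concrete: suppose a property holds for small inputs and fails from some boundary upward. A minimal failing input can then be found by binary search. A toy sketch in plain Java (this assumes a monotone property; real shrinkers, including junit-quickcheck’s, are more general than this):

```java
public class ShrinkSketch {

	// a deliberately broken "property": fails for any n >= 100
	static boolean property(int n) {
		return n < 100;
	}

	// binary search for the smallest failing value, assuming the property is monotone
	static int shrink(int failing) {
		int lo = 0;       // property(0) passes
		int hi = failing; // property(hi) fails
		while (hi - lo > 1) {
			int mid = lo + (hi - lo) / 2;
			if (property(mid)) {
				lo = mid;
			} else {
				hi = mid;
			}
		}
		return hi;
	}

	public static void main(String[] args) {
		// a large randomly-found failing value shrinks to the boundary
		System.out.println("smallest failing value: " + shrink(100000));
	}
}
```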

Download an example project
You can download a Maven project containing the Calculator class and all JUnit-Quickcheck tests that have been demonstrated in this post here. Happy property-based testing!

Running Selenium JUnit tests from Jenkins

In this post I want to show you how to use Jenkins to automatically execute Selenium tests written in JUnit format, and how the results from these tests can be directly reported back to Jenkins. To achieve this, we need to complete the following steps:

  • Write some Selenium tests in JUnit format that we want to execute
  • Create a build file that runs these tests and writes the reports to disk
  • Set up a Jenkins job that runs these tests and interprets the results

Note: first, a point of attention: I couldn’t get this to work while Jenkins was installed as a Windows service. This has something to do with Jenkins opening browser windows and subsequently not having suitable permissions to access sites and handle Selenium calls. I solved it by starting Jenkins ‘by hand’: download the .war file from the Jenkins site and run it using java -jar jenkins.war

Creating Selenium tests to run
First, we need some tests to run. I’ve created three short tests in JUnit format, one of which has an intentional error for demonstration purposes – it’s good practice to check whether test defects actually show up in Jenkins! Using the JUnit format implies that tests can be run independently, so there can’t be any dependencies between tests. My test class looks like this (I’ve removed two tests and all import statements for brevity):

package com.ontestautomation.selenium.ci;

public class SeleniumCITest {
	
	static WebDriver driver;
	
	@Before
	public void setup() {
		
		driver = new FirefoxDriver();
		driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);				
	}
	
	@Test
	public void successfulLoginLogout() {
		
		driver.get("http://parabank.parasoft.com");
		Assert.assertEquals(driver.getTitle(), "ParaBank | Welcome | Online Banking");
		driver.findElement(By.name("username")).sendKeys("john");
		driver.findElement(By.name("password")).sendKeys("demo");
		driver.findElement(By.cssSelector("input[value='Log In']")).click();
		Assert.assertEquals(driver.getTitle(), "ParaBank | Accounts Overview");
		driver.findElement(By.linkText("Log Out")).click();
		Assert.assertEquals(driver.getTitle(), "ParaBank | Welcome | Online Banking");
	}
	
	@After
	public void teardown() {
		driver.quit();
	}	
}

Pretty straightforward, but good enough.

Creating a build file to run tests automatically
Next, we create a build file to run our tests automatically. I used Ant for this, but Maven would work as well. I had Eclipse generate an Ant build file for me, then changed it so Jenkins could run the tests too. In my case, I only needed to change the location of the dependencies (the Selenium and JUnit .jar files) to a location where Jenkins could find them:

<path id="seleniumCI.classpath">
    <pathelement location="bin"/>
    <pathelement location="C:/libs/selenium-server-standalone-2.44.0.jar"/>
    <pathelement location="C:/libs/junit-4.11.jar"/>
</path>

Note that I ran my tests on my own system, so in this case it’s OK to use absolute paths to the .jar files, but it’s by no means good practice to do so! It’s better to use paths relative to your Jenkins workspace, so tests and projects are transferable and can be run on any system without having to change the build.xml.

Actual test execution is a matter of using the junit and junitreport tasks:

<target name="SeleniumCITest">
    <mkdir dir="${junit.output.dir}"/>
    <junit fork="yes" printsummary="withOutAndErr">
        <formatter type="xml"/>
        <test name="com.ontestautomation.selenium.ci.SeleniumCITest" todir="${junit.output.dir}"/>
        <classpath refid="seleniumCI.classpath"/>
        <bootclasspath>
            <path refid="run.SeleniumCITest (1).bootclasspath"/>
        </bootclasspath>
    </junit>
</target>
<target name="junitreport">
    <junitreport todir="${junit.output.dir}">
        <fileset dir="${junit.output.dir}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report format="frames" todir="${junit.output.dir}"/>
    </junitreport>
</target>

This is generated automatically when you create your build.xml using Eclipse, by the way.

Running your tests through Jenkins
The final step is setting up a Jenkins job that simply calls the correct Ant target in a build step:
Ant build step
After tests have been run, Jenkins should pick up the JUnit test results from the folder specified in the junitreport task in the Ant build.xml:
JUnit report post build action
If everything is set up correctly, you should now be able to run your tests through Jenkins and have the results displayed:
Build result in Jenkins
You can also view details on the individual test results by clicking on the error message:
Error details in Jenkins

The Eclipse project I have used for this example can be downloaded here.