An introduction to mutation testing and PIT

This blog post covers the basics of mutation testing. I became aware of mutation testing only recently, but I have since discovered that it is a very powerful and interesting technique for:

  • analysis and improvement of unit tests
  • detection of dead code in your application

Two things that are always worth taking a look at, if you ask me. I will illustrate the concept of mutation testing using a tool called PIT, a simple piece of code, and an accompanying set of unit tests.

What is mutation testing?
From Wikipedia:

Mutation testing is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant and tests detect and reject mutants by causing the behavior of the original version to differ from the mutant. This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill.

In other words, mutation testing is a technique that evaluates not only the percentage of code that is executed when running your tests (i.e., code coverage), but also the ability of your tests to detect defects in that code. This makes mutation testing a very powerful and useful technique that I think anyone involved in software development and testing should at least be aware of.
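
To make this concrete, consider a method and one possible mutant of it. This is a hypothetical example of the kind of change a mutation testing tool makes; the exact mutations applied depend on the tool and its configuration:

// Original production code
public boolean isPositive(int x) {
	return x > 0;
}

// A possible mutant: the conditional boundary has been changed
public boolean isPositive(int x) {
	return x >= 0;
}

A test suite that only calls isPositive() with values such as 5 and -5 passes against both versions, so the mutant survives. Only a test that checks the boundary value 0 kills this mutant, and that is exactly the kind of gap mutation testing exposes.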

Introducing PIT
I will try to illustrate the power of mutation testing using PIT, a Java mutation testing tool which can be downloaded here. I chose PIT over other available mutation testing tools mainly because of its ease of installation and use.

Assuming you’re also using Maven, you can configure your Java project for mutation testing using PIT by adding the following to your pom.xml:

<build>
	<plugins>
		<plugin>
			<groupId>org.pitest</groupId>
			<artifactId>pitest-maven</artifactId>
			<version>PIT-VERSION</version>
			<configuration>
				<targetClasses>
					<param>package.root.containing.classes.to.mutate*</param>
				</targetClasses>
				<targetTests>
					<param>package.root.containing.test.classes*</param>
				</targetTests>
			</configuration>
		</plugin>
	</plugins>
</build>

Simply replace the package locators with those appropriate for your project and be sure not to forget the asterisk at the end. Also replace PIT-VERSION with the PIT version you want to use (the latest is 1.1.4 at the moment of writing this blog post) and you’re good to go.

The code class and test class to be subjected to mutation testing
I created a very simple Calculator class that, you guessed it, performs simple arithmetic on integers. My calculator only does addition, subtraction and power calculations:

public class Calculator {

	int valueDisplayed;

	public Calculator() {
		this.valueDisplayed = 0;
	}
	
	public Calculator(int initialValue) {
		this.valueDisplayed = initialValue;
	}

	public void add(int x) {
		this.valueDisplayed += x;
	}
	
	public void subtract(int x) {
		this.valueDisplayed -= x;
	}
	
	public void power(int x) {
		this.valueDisplayed = (int) Math.pow(this.valueDisplayed, x);
	}

	public int getResult() {
		return this.valueDisplayed;
	}
	
	public void set(int x) {
		this.valueDisplayed = x;
	}

	public boolean setConditional(int x, boolean yesOrNo) {
		if(yesOrNo) {
			set(x);
			return true;
		} else {
			return false;
		}
	}
}

To test the calculator, I have created a couple of TestNG unit tests that call the various methods my calculator supports. Note that PIT supports both JUnit and TestNG.

import org.testng.Assert;
import org.testng.annotations.Test;

public class CalculatorTest {
	
	@Test
	public void testAddition() {
		
		Calculator calculator = new Calculator();
		calculator.add(2);
		Assert.assertEquals(calculator.getResult(), 2);
	}
	
	@Test
	public void testPower() {
		
		Calculator calculator = new Calculator(2);
		calculator.power(3);
		Assert.assertEquals(calculator.getResult(), 8);
	}
	
	@Test
	public void testConditionalSetTrue() {
		
		Calculator calculator = new Calculator();
		Assert.assertEquals(calculator.setConditional(2, true), true);
	}
	
	@Test
	public void testConditionalSetFalse() {
		
		Calculator calculator = new Calculator();
		Assert.assertEquals(calculator.setConditional(3, false), false);
	}
}

To illustrate the capabilities of PIT and of mutation testing in general, I ‘forgot’ to include a test for the subtract() method. I also created what is known as a ‘weak test’: a test that passes but does not check whether all code is actually executed (in this case, there is no check that set() is called when calling setConditional()). Now, when we run PIT on our code and test classes using:

mvn org.pitest:pitest-maven:mutationCoverage

an HTML report is generated displaying our mutation test results:

The report as generated by PIT

When we drill down to our Calculator class we can see the modifications that have been made by PIT and the effect it had on our tests:

Class-level details of the PIT mutation test results

This clearly shows that our unit test suite has room for improvement:

  • The fact that subtract() is never called in our test suite (i.e., our code coverage can be improved) is detected
  • The fact that the call to set() can be removed from the code without our test results being affected (i.e., our tests are lacking defect detection power) is detected

These holes in our test coverage and test effectiveness might go unnoticed for a long time, especially since all tests pass when run with TestNG. This is especially true for the second flaw, as a regular code coverage tool would not pick it up: the call to set() is made, after all; it just has no effect on the outcome of our tests!
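
For example, killing the surviving mutants found above requires adding a subtraction test and strengthening the conditional set test so that it also verifies the resulting state. A sketch, in the same TestNG style as the tests above:

@Test
public void testSubtraction() {
	
	Calculator calculator = new Calculator(5);
	calculator.subtract(2);
	Assert.assertEquals(calculator.getResult(), 3);
}

@Test
public void testConditionalSetTrueChangesValue() {
	
	Calculator calculator = new Calculator();
	Assert.assertEquals(calculator.setConditional(2, true), true);
	// Verify that set() was actually called; this kills the mutant
	// in which the call to set() is removed
	Assert.assertEquals(calculator.getResult(), 2);
}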

Additional PIT features
The PIT documentation describes a number of features that make your mutation testing efforts even more powerful. You can configure the set of mutators used to tailor the result set to your needs, you can use mutation filters to filter out unwanted results, and much more. However, even in its default configuration, PIT (or potentially any other mutation testing tool, such as those listed here) will tell you a lot about the quality of your unit testing efforts.
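
For example, restricting PIT to a specific set of mutators is a matter of extending the plugin configuration in your pom.xml. A sketch using a few of the mutators PIT provides (see the PIT documentation for the complete list):

<configuration>
	<mutators>
		<mutator>CONDITIONALS_BOUNDARY</mutator>
		<mutator>NEGATE_CONDITIONALS</mutator>
		<mutator>VOID_METHOD_CALLS</mutator>
	</mutators>
</configuration>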

Removing dead code from your codebase based on mutation test results
Apart from evaluating the quality of your unit tests, mutation test results can also give you insight into which parts of your application code are never executed (dead code). Consider the call to the set() method in the example above. The mutation test results indicated that this call could be removed without the results of the unit test being altered. Now, in our case it is pretty obvious that this indicates a lack of coverage in our unit tests (if you want to set the Calculator value, you’d better call the set() method), but it isn’t hard to imagine a situation where such a method call can be removed without any further consequences. In this case, the results of the mutation tests will point you to potentially dead code that might be a candidate for refactoring or removal. Thanks go to Markus Schirp for pointing out this huge advantage of mutation testing to me on Twitter.

Example project
The Maven project that was used to generate the results demonstrated in this post can be downloaded here. You can simply import this project and run

mvn org.pitest:pitest-maven:mutationCoverage

to recreate my test results and review the generated report. This will serve as a good starting point for your further exploration of the power and possibilities of mutation testing.

Using the ExtentReports TestNG listener in Selenium Page Object tests

In this (long overdue) post I would like to demonstrate how to use ExtentReports as a listener for TestNG. By doing so, you can use your regular TestNG assertions and still create nicely readable ExtentReports-based HTML reports. I’ll show you how to do this using a Selenium WebDriver test that uses the Page Object pattern.

Creating the Page Objects
First, we need to create some page objects that we are going to exercise during our tests. As usual, I’ll use the Parasoft Parabank demo application for this. My tests cover the login functionality of the application, so I have created Page Objects for the login page, the error page where you land after an incorrect login, and the homepage (the AccountsOverviewPage) where you end up after a successful login. For those of you who have read my past post on Selenium and TestNG: these Page Objects have been used in that example as well. The Page Object source files are included in the Eclipse project that you can download at the end of this post.
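
To give an impression of the pattern, here is a minimal sketch of what the LoginPage used in these tests could look like. The element locators shown here are assumptions; the actual implementation is included in the downloadable project:

public class LoginPage {

	private final WebDriver driver;

	public LoginPage(WebDriver driver) {
		this.driver = driver;
		driver.get("http://parabank.parasoft.com");
	}

	public AccountsOverviewPage correctLogin(String username, String password) {
		login(username, password);
		return new AccountsOverviewPage(driver);
	}

	public ErrorPage incorrectLogin(String username, String password) {
		login(username, password);
		return new ErrorPage(driver);
	}

	private void login(String username, String password) {
		driver.findElement(By.name("username")).sendKeys(username);
		driver.findElement(By.name("password")).sendKeys(password);
		driver.findElement(By.xpath("//input[@value='Log In']")).click();
	}
}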

Creating the tests
To demonstrate the reporting functionality, I have created three tests:

  • A test that performs a successful login – this one passes
  • A test that performs an unsuccessful login, where the check on the resulting error message passes – this one also passes
  • A test that performs an unsuccessful login, where the check on the resulting error message fails – this one fails

The tests look like this:

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

// Page Object imports (LoginPage, ErrorPage, AccountsOverviewPage) are omitted here;
// these classes are part of the example project

public class LoginTest {

	WebDriver driver;

	@BeforeSuite
	public void setUp() {

		driver = new FirefoxDriver();
		driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
	}

	@Parameters({"incorrectusername", "incorrectpassword"})
	@Test(description = "Performs an unsuccessful login and checks the resulting error message (passes)")
	public void testFailingLogin(String username, String password) {

		LoginPage lp = new LoginPage(driver);
		ErrorPage ep = lp.incorrectLogin(username, password);
		Assert.assertEquals(ep.getErrorText(), "The username and password could not be verified.");
	}

	@Parameters({"incorrectusername", "incorrectpassword"})
	@Test(description = "Performs an unsuccessful login and checks the resulting error message (fails)")
	public void failingTest(String username, String password) {

		LoginPage lp = new LoginPage(driver);
		ErrorPage ep = lp.incorrectLogin(username, password);
		Assert.assertEquals(ep.getErrorText(), "This is not the error message you're looking for.");
	}

	@Parameters({"correctusername", "correctpassword"})
	@Test(description = "Performs a successful login and checks whether the Accounts Overview page is opened")
	public void testSuccessfulLogin(String username, String password) {

		LoginPage lp = new LoginPage(driver);
		AccountsOverviewPage aop = lp.correctLogin(username, password);
		Assert.assertEquals(aop.isAt(), true);
	}

	@AfterSuite
	public void tearDown() {

		driver.quit();
	}
}

Pretty straightforward, right? I think this is a clear example of how using Page Objects, with the right Page Object methods available, makes writing and maintaining tests a breeze.

Creating the ExtentReports TestNG listener
Next, we need to define the TestNG listener that creates the ExtentReports reports during test execution:

import java.io.File;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.Map;

import org.testng.IReporter;
import org.testng.IResultMap;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.ITestContext;
import org.testng.ITestResult;
import org.testng.xml.XmlSuite;

import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class ExtentReporterNG implements IReporter {
    private ExtentReports extent;
 
    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        extent = new ExtentReports(outputDirectory + File.separator + "ExtentReportTestNG.html", true);
 
        for (ISuite suite : suites) {
            Map<String, ISuiteResult> result = suite.getResults();
 
            for (ISuiteResult r : result.values()) {
                ITestContext context = r.getTestContext();
 
                buildTestNodes(context.getPassedTests(), LogStatus.PASS);
                buildTestNodes(context.getFailedTests(), LogStatus.FAIL);
                buildTestNodes(context.getSkippedTests(), LogStatus.SKIP);
            }
        }
 
        extent.flush();
        extent.close();
    }
 
    private void buildTestNodes(IResultMap tests, LogStatus status) {
        ExtentTest test;
 
        if (tests.size() > 0) {
            for (ITestResult result : tests.getAllResults()) {
                test = extent.startTest(result.getMethod().getMethodName());
 
                test.getTest().startedTime = getTime(result.getStartMillis());
                test.getTest().endedTime = getTime(result.getEndMillis());
 
                for (String group : result.getMethod().getGroups())
                    test.assignCategory(group);
 
                String message = "Test " + status.toString().toLowerCase() + "ed";
 
                if (result.getThrowable() != null)
                    message = result.getThrowable().getMessage();
 
                test.log(status, message);
 
                extent.endTest(test);
            }
        }
    }
 
    private Date getTime(long millis) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(millis);
        return calendar.getTime();        
    }
}

This listener will create an ExtentReports report called ExtentReportTestNG.html in the default TestNG output folder test-output. This report lists all passed tests, then all failed tests and finally all tests that were skipped during execution. There’s no need to add specific ExtentReports log statements to your tests.

Running the test and reviewing the results
To run the tests, we need to define a testng.xml file that enables the listener we just created and runs all tests we want to run:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">

<suite name="Parabank login test suite" verbose="1">
	<listeners>
		<listener class-name="com.ontestautomation.extentreports.listener.ExtentReporterNG" />
	</listeners>
	<parameter name="correctusername" value="john" />
	<parameter name="correctpassword" value="demo" />
	<parameter name="incorrectusername" value="wrong" />
	<parameter name="incorrectpassword" value="credentials" />
	<test name="Login tests">
		<packages>
			<package name="com.ontestautomation.extentreports.tests" />
		</packages>
	</test>
</suite>

When we run our test suite using this testng.xml file, all tests in the com.ontestautomation.extentreports.tests package are run and an ExtentReports HTML report is created in the default test-output folder. The resulting report can be seen here.
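
If you prefer running the suite from Maven instead of from your IDE, you can point the Surefire plugin at this testng.xml file. A sketch of the relevant pom.xml section:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<suiteXmlFiles>
			<suiteXmlFile>testng.xml</suiteXmlFile>
		</suiteXmlFiles>
	</configuration>
</plugin>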

More examples
More examples on how to use the ExtentReports listener for TestNG can be found on the ExtentReports website.

The Eclipse project I have created to demonstrate the above can be downloaded here.

Testing web services using in-memory web containers

Last week, I read an interesting blog post on how to test Java web services inside an in-memory web container. I thought it was a fast and elegant solution to a well-known unit testing problem: unit tests for web services become cumbersome when your web service uses the services provided by the container and you need to mock these container services. On the other hand, if you don’t mock them, your unit tests are virtually worthless. The blog post, originally written by Antonio Goncalves, provides an elegant solution to this problem: use an in-memory container. In this case, he uses Sun’s implementation of Java SE 6, which includes a light-weight HTTP server API and implementation: com.sun.net.httpserver.
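
The core of the approach is that JAX-WS can publish an endpoint directly from a test, backed by this light-weight HTTP server, so no full application server is needed. A minimal sketch of that pattern (the service implementation class and the URL are illustrative):

import javax.xml.ws.Endpoint;

import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;

public class ComposerServiceTest {

	private static final String URL = "http://localhost:8080/composerService";
	private Endpoint endpoint;

	@BeforeClass
	public void publishEndpoint() {
		// Publish the service on the light-weight HTTP server included in the JDK
		endpoint = Endpoint.publish(URL, new ComposerServiceImpl());
	}

	@AfterClass
	public void stopEndpoint() {
		endpoint.stop();
	}

	// Tests can now call the service at URL using a regular web service client
}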

In this post, I will show a slightly improved version of his solution using my own web service implementation. In order to get the example below to work, all I needed to do was include a recent version of the javax.jws library, which you can find for example here.

I have constructed a simple service that, given a composer, returns his name, his best known work, his country of birth and his age of death. The interface for this service looks like this:
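
A sketch of what such a JAX-WS interface could look like (the names and signatures here are assumptions, not the actual project code):

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public interface ComposerService {

	// Given a composer's name, these methods return his best known work,
	// his country of birth and his age of death
	@WebMethod
	String getBestKnownWork(String composerName);

	@WebMethod
	String getCountryOfBirth(String composerName);

	@WebMethod
	int getAgeOfDeath(String composerName);
}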