First steps into testing in a Continuous Delivery setting

For the first time in my 10 years in the testing business I am working on a project where they do Continuous Delivery (CD). This gives me an opportunity to learn as well as a lot to think about, especially when it comes to the role of testing and test automation in CD.

A lot has been written on the concept and the pros and cons of CD itself already, and since I’m clearly not an expert on the topic – although I did deliver a presentation at the inaugural Continuous Delivery Conference in the Netherlands – I won’t try to rehash information or opinions that have already been presented on other blogs and in other articles. If you want a good introduction to CD, I advise you to read this book by Jez Humble and Dave Farley instead.

Continuous Delivery by Jez Humble and Dave Farley

However, there is a topic very closely related to CD that I feel does warrant a closer look, and that’s the creation of an optimal test strategy for projects and organizations that use CD. And by ‘optimal test strategy’ I really mean:

How can we make sure that testing activities in CD neither negatively affect speed of delivery nor neglect potentially poor quality of the product delivered?

Or, to put it differently: how can we make sure that we deliver high quality products fast?

Let’s take a quick look at what I think is the role of testing activities in the CD process.

Testing in the CD process
Of course, the underlying goal of testing in CD isn’t any different from testing in any other software development and delivery approach: testing is done to be able to make informed statements about the quality of a product, in this case probably a piece of software. However, with CD, testers typically have a lot less time to test new functionality (let alone perform regression testing) than in typical waterfall or Agile software development processes, where working software is released once every few weeks. In CD, changes made to the software are typically pushed through the CD pipeline and ultimately deployed into the production environment in a matter of minutes. As a consequence:

  • A lot of trust needs to be placed in the automated checks that are part of the CD pipeline
  • Manual testing needs to be done outside the CD pipeline. This may or may not be done in the production environment

Test automation in CD
Because of the required speed of delivery, a lot of tests (or checks, really) should be automated. These automated checks should cover enough ground to give the team delivering the software confidence that it does what it’s supposed to do once the CD pipeline has completed and the software is in the production environment.

Of course, not all tests can be automated. There’s still a need for testers to really explore and test (as opposed to check) the application. But this can only be done after the CD pipeline has completed, i.e., testing can only be done on software that is already in production. The way this is solved in my current project is by putting new functionality behind a feature flag: the software IS installed in the production environment, but the new functionality only becomes visible once you set the flag (implemented as a cookie in our case, as we’re dealing with a web application). This allows testers to test the software safely, without end users being impacted by any bugs that escaped the automated checks in the CD pipeline. Once all tests are completed, the feature flag is removed and the software is really live. Another approach (used by Facebook, for example) is to make new features available to a small group of users first, wait for their feedback and then gradually roll the new feature out to a wider audience.
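To make the cookie-based feature flag a little more concrete, here is a minimal sketch of how such a check could be implemented as a servlet filter in a Java web application. Note that this is just an illustration of the general idea: the filter, the cookie name and the URL below are made up and not taken from our actual project.

import java.io.IOException;

import javax.servlet.*;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter that only exposes a new feature when a specific
// feature flag cookie has been set (for example, by a tester)
public class FeatureFlagFilter implements Filter {

	private static final String FLAG_COOKIE = "enable-new-feature";

	@Override
	public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
			throws IOException, ServletException {
		if (isFlagSet((HttpServletRequest) request)) {
			// The flag cookie is present: route to the new, not-yet-released feature
			request.getRequestDispatcher("/newFeature").forward(request, response);
		} else {
			// Regular end users never see the new feature
			chain.doFilter(request, response);
		}
	}

	private boolean isFlagSet(HttpServletRequest request) {
		Cookie[] cookies = request.getCookies();
		if (cookies == null) {
			return false;
		}
		for (Cookie cookie : cookies) {
			if (FLAG_COOKIE.equals(cookie.getName()) && "true".equals(cookie.getValue())) {
				return true;
			}
		}
		return false;
	}

	@Override
	public void init(FilterConfig filterConfig) {
	}

	@Override
	public void destroy() {
	}
}

The nice thing about using a cookie for this is that a tester can flip the flag for his or her own browser session only, without affecting anyone else on the production environment.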

Automated acceptance testing in Continuous Delivery

An integrated test strategy
So, what should testing in a CD world look like? If you ask me, it all comes down to a proper integrated test strategy. That is, both developers and testers should know who tests what, so that there are no gaping holes in the overall test coverage (to prevent quality leakage) and no double work is done (to prevent unnecessary waste of time). Only once it is clear who tests what, and at which point in the CD process, can you decide whether or not to automate the accompanying tests and at what level (unit, integration, end-to-end).

Creating a well-functioning integrated test strategy requires adaptation from both testers and developers:

  • Developers should become even more aware of the importance of testing and start looking beyond plain happy-flow testing in their unit and integration tests (a short example of what I mean follows below this list). This removes a lot of potential defects that are otherwise detected later in the process, if at all. If you’re a developer, please do read this article on SimpleProgrammer to see what I’m trying to get at.
  • Testers should start getting closer to developers, partly to better understand what they are doing, partly to help them with refining and improving their testing skills. This might require you to get comfortable reading and reviewing code, and if you’re into test automation or willing to become so, to start learning to write some code yourself.
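
To make the first point a bit more concrete: a happy-flow unit test only verifies that things work when all input is valid, while a more thorough test suite also pins down how the code behaves on invalid input. Below is a minimal sketch of what that could look like; the Account class is made up purely to keep the example self-contained.

import org.junit.Assert;
import org.junit.Test;

// Hypothetical class under test, included only to make the example runnable
class Account {

	private int balance;

	Account(int balance) {
		this.balance = balance;
	}

	int getBalance() {
		return balance;
	}

	void withdraw(int amount) {
		if (amount < 0) {
			throw new IllegalArgumentException("Amount must be positive");
		}
		if (amount > balance) {
			throw new IllegalStateException("Insufficient balance");
		}
		balance -= amount;
	}
}

public class AccountTest {

	@Test
	public void withdrawalWithinBalanceSucceeds() {
		// The happy flow: withdrawing a valid amount updates the balance
		Account account = new Account(100);
		account.withdraw(40);
		Assert.assertEquals(60, account.getBalance());
	}

	@Test(expected = IllegalArgumentException.class)
	public void withdrawingANegativeAmountIsRejected() {
		// Beyond the happy flow: invalid input should be rejected explicitly
		new Account(100).withdraw(-10);
	}

	@Test(expected = IllegalStateException.class)
	public void withdrawingMoreThanTheBalanceIsRejected() {
		// Beyond the happy flow: overdrawing should not silently succeed
		new Account(100).withdraw(200);
	}
}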

Further reading
If after reading this blog post you want another take on the role of testing and testers in a CD process, I highly recommend reading this article from the SD Times.

Running Selenium JUnit tests from Jenkins

In this post I want to show you how to use Jenkins to automatically execute Selenium tests written in JUnit format, and how the results from these tests can be directly reported back to Jenkins. To achieve this, we need to complete the following steps:

  • Write some Selenium tests in JUnit format that we want to execute
  • Create a build file that runs these tests and writes the reports to disk
  • Set up a Jenkins job that runs these tests and interprets the results

Note: I couldn’t get this to work while Jenkins was installed as a Windows service. This has to do with Jenkins opening browser windows and subsequently not having suitable permissions to access sites and handle Selenium calls. I solved this by starting Jenkins ‘by hand’: download the .war file from the Jenkins site and run it using java -jar jenkins.war

Creating Selenium tests to run
First, we need to have some tests that we would like to run. I’ve created three short tests in JUnit format, one of which contains an intentional error for demonstration purposes – it’s always good to check that test failures actually show up in Jenkins! Using the JUnit format implies that tests should be able to run independently, so there can’t be any dependencies between individual tests. My test class looks like this (I’ve removed two of the three tests for brevity):

package com.ontestautomation.selenium.ci;

import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SeleniumCITest {

	static WebDriver driver;

	@Before
	public void setup() {
		// Start a fresh browser session before every test
		driver = new FirefoxDriver();
		driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
	}

	@Test
	public void successfulLoginLogout() {
		// Log in with valid credentials and log out again,
		// verifying the page title after every step
		driver.get("http://parabank.parasoft.com");
		Assert.assertEquals("ParaBank | Welcome | Online Banking", driver.getTitle());
		driver.findElement(By.name("username")).sendKeys("john");
		driver.findElement(By.name("password")).sendKeys("demo");
		driver.findElement(By.cssSelector("input[value='Log In']")).click();
		Assert.assertEquals("ParaBank | Accounts Overview", driver.getTitle());
		driver.findElement(By.linkText("Log Out")).click();
		Assert.assertEquals("ParaBank | Welcome | Online Banking", driver.getTitle());
	}

	@After
	public void teardown() {
		driver.quit();
	}
}

Pretty straightforward, but good enough.

Creating a build file to run tests automatically
Now to create a build file to run our tests automatically. I used Ant for this, but Maven should work as well. I had Eclipse generate an Ant build file for me, then changed it so that Jenkins could run the tests as well. In my case, I only needed to change the location of the dependencies (the Selenium and JUnit .jar files) to a location where Jenkins could find them:

<path id="seleniumCI.classpath">
    <pathelement location="bin"/>
    <pathelement location="C:/libs/selenium-server-standalone-2.44.0.jar"/>
    <pathelement location="C:/libs/junit-4.11.jar"/>
</path>

Note that I ran my tests on my own system, so in this case it’s OK to use absolute paths to the .jar files, but this is by no means good practice! It’s better to use paths relative to your Jenkins workspace, so that tests and projects are transferable and can be run on any system without having to change the build.xml.
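For example, if you keep the .jar files in a lib folder inside your project (and thus inside the Jenkins workspace), the classpath could look something like this – the lib folder is just an assumption about your project layout:

<path id="seleniumCI.classpath">
    <pathelement location="bin"/>
    <pathelement location="lib/selenium-server-standalone-2.44.0.jar"/>
    <pathelement location="lib/junit-4.11.jar"/>
</path>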

Actual test execution is a matter of using the junit and junitreport tasks:

<target name="SeleniumCITest">
    <mkdir dir="${junit.output.dir}"/>
    <junit fork="yes" printsummary="withOutAndErr">
        <formatter type="xml"/>
        <test name="com.ontestautomation.selenium.ci.SeleniumCITest" todir="${junit.output.dir}"/>
        <classpath refid="seleniumCI.classpath"/>
        <bootclasspath>
            <path refid="run.SeleniumCITest (1).bootclasspath"/>
        </bootclasspath>
    </junit>
</target>
<target name="junitreport">
    <junitreport todir="${junit.output.dir}">
        <fileset dir="${junit.output.dir}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report format="frames" todir="${junit.output.dir}"/>
    </junitreport>
</target>

This is generated automatically when you create your build.xml using Eclipse, by the way.
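Note that both targets rely on the junit.output.dir property, which should be defined somewhere near the top of your build.xml. If Eclipse didn’t generate it for you, a definition could look like this (the junit folder name is just an example; any folder inside the workspace will do):

<property name="junit.output.dir" value="junit"/>

With this in place, running ant SeleniumCITest junitreport from the project directory executes the tests and generates the reports, which is exactly what we will have Jenkins do next.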

Running your tests through Jenkins
The final step is setting up a Jenkins job that simply calls the correct Ant target in a build step:
Ant build step
After the tests have been run, Jenkins should pick up the JUnit test results (using the ‘Publish JUnit test result report’ post-build action) from the folder specified in the junitreport task in the Ant build.xml:
JUnit report post build action
If everything is set up correctly, you should now be able to run your tests through Jenkins and have the results displayed:
Build result in Jenkins
You can also view details on the individual test results by clicking on the error message:
Error details in Jenkins

The Eclipse project I have used for this example can be downloaded here.