Let your test automation talk to you

Most of my current projects involve me paving the automation way: discussing requirements and needs with stakeholders, deciding on an approach and the tools to use, setting up a solution and some initial tests, and then instructing others and handing over the project. I love doing this type of project, as it allows me (and I'm bored quite easily and quickly) to move from project to project and from client to client regularly (sometimes once every couple of weeks). One of the most important factors in facilitating an easy handover of 'my' (it isn't really mine, of course) automation solution to those who are going to continue and expand on it is making the automation setup as self-explanatory as possible.

This is especially important when those who are going to take over don't have much experience with the tools and architecture that have been decided upon (or, in some cases, that I've decided to use). Let's take a look at two ways to make it easy to hand over automation (and its results) to your successor (or stakeholder), or, as I like to call it, to have your automation talk to you about what it's doing and what the results of executing the automated tests are. Not literally talk to you (though that would be nice!), but you know what I mean.

Through the code
One of the most effective ways of making your automation solution easily explainable to others is through the code. Not only should your code be well structured and maintainable; good code also requires little to no comments (who's got the time or motivation to write comments anyway?) because it is self-explanatory. Some prime examples of self-describing code that I've used or seen are:

Having your Page Object methods in Selenium return Page Objects.
When you write your Page Object Methods like this:

public LoginPage SetEmailAddressTo(string emailAddress)
{
    PElements.SendKeys(_driver, textfieldEmailAddress, emailAddress);
    return this;
}

public LoginPage SetPasswordTo(string password)
{
    PElements.SendKeys(_driver, textfieldPassword, password);
    return this;
}

public void ClickLoginButton()
{
    PElements.Click(_driver, buttonLogin);
}
you can write your tests (or your step definition implementations, if you’re using Cucumber or SpecFlow) like this:

[When(@"he logs in using his credentials")]
public void WhenHeLogsInUsingHisCredentials()
{
    // credentials shown here are placeholders
    new LoginPage()
        .SetEmailAddressTo("user@example.com")
        .SetPasswordTo("s3cr3t")
        .ClickLoginButton();
}
It's instantly clear what your test is doing here, all by means of allowing method chaining through well-chosen method return types and expressive method names.

Choosing to work with libraries that offer fluent APIs
Two prime examples I often use are REST Assured and WireMock. For example, REST Assured allows you to create powerful, yet highly readable tests for RESTful APIs, while abstracting away boilerplate code like this:

@Test
public void verifyCountryForZipCode() {
	given().
	when().
		get("http://api.zippopotam.us/us/90210"). // example public zip code API
	then().
		body("country", equalTo("United States"));
}
WireMock does something similar, but for creating over-the-wire mocks:

public void setupExampleStub() {
	stubFor(get(urlEqualTo("/example")) // example URL to be mocked
		.willReturn(aResponse()
			.withStatus(200)
			.withHeader("Content-Type", "application/xml")
			.withBody("<message>Response</message>")));
}
One of the main reasons I like to work with both is that it’s so easy to read the code and explain to others what it does, without loss of power and features.

Create fluent assertions
Arguably the most important part of your test code is where the assertions are being made. Of course you need readable arrange (given) and act (when) sections too, but when you're explaining or demonstrating your code to others, having readable assert (then) sections will be of great help, simply because that's where the magic happens (so to say). You can make your assertions pretty much self-explanatory by using libraries such as Hamcrest when you're using Java, or FluentAssertions when you're working with C#. The latter even allows you to create more readable error messages when an assertion fails:

IEnumerable collection = new[] { 1, 2, 3 };
collection.Should().HaveCount(4, "because we thought we put four items in the collection");

results in the following error message:

Expected <4> items because we thought we put four items in the collection, but found <3>.

Through the reporting
Now that you’ve got your code all cleaned up and readable, it’s time to shift our attention towards the output of the test execution: your reporting. We’ve seen the first step towards clearly readable and therefore useful reporting above: readable error messages. But good reporting is more than that. First, you need to think of the audience for your report. Who’s going to be informed by your report?

  • Is it a developer? In that case you might want to include details about the test data used, steps to reproduce the erroneous situation and detailed information about the failure (such as stack traces and screenshots) in the report. Pretty much everything that makes it easy for your devs to analyze the failure, so to say. In my experience, more is better here.
  • Is it a manager or a product owner? In that case he or she is probably only interested in the overall outcome (did the tests pass?). Maybe also in test coverage (what is it that we covered with these automated tests?). They’ll probably be less interested in stack traces, though.
  • Is it a build tool, such as Jenkins or Bamboo? In that case the highest priority is that the results are presented in a way that can be interpreted automatically by that tool. A prime example is the xUnit test output format that frameworks such as JUnit and TestNG produce. These can be picked up, parsed and presented by Jenkins, Bamboo and any other CI tool that’s worth its salt without any additional configuration or programming.
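The xUnit report format itself is simple enough to sketch. A minimal result file (the class names, counts, and timings below are made up for illustration) looks something like this:

```xml
<testsuite name="LoginTests" tests="2" failures="1" errors="0" time="3.21">
  <testcase classname="com.example.LoginTests" name="testLoginOK" time="1.80"/>
  <testcase classname="com.example.LoginTests" name="testLoginNOK" time="1.41">
    <failure message="expected error message was not shown"/>
  </testcase>
</testsuite>
```

A CI server picks up files like this after the build step and turns them into pass/fail overviews and trend graphs without any extra programming.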

Again, think about the audience for your test automation reports, and include the information that’s valuable and useful to them. Nothing less, but nothing more either. This might require you to talk with these stakeholders. This might even require you to create more than a single report for every test run. Doesn’t matter. Have your automation talk to you and to your stakeholders by means of proper reporting.

Creating a reusable FitNesse test suite (or how lasagna beats spaghetti)

In my current project I am – aside from other test-related tasks – responsible for the development and maintenance of an automated regression test suite in FitNesse.

Before starting here in December of last year, I had no prior experience with FitNesse. I had heard of it before, though, and knew it was both a wiki and an automated test framework. I have used a couple of different wiki systems before (mainly Atlassian Confluence), but only for documentation purposes, never for test automation. As you probably know, it’s very easy to start using a wiki and to start adding content.

However, maintaining a workable structure and making sure that information can be found easily is a different question. If you start to add information to a wiki at will, you’ll soon end up with a structure that resembles something like this:

It's easy to make a wiki look like this!

This is what seemed to have happened with the existing automated test suite as well. It was working decently, but tests were unstructured, there was a lot of duplication and finding your way around was no easy task. Pages were linked to one another randomly and there were several levels of inclusion (at least three or four) in most of the tests.

Now don’t get me wrong, I like spaghetti (a lot, actually!), but it soon became pretty clear that maintaining and extending such an automated test suite was going to be a daunting, if not impossible task. And as history has taught so many people who started out with test automation: if your tests are not easily maintainable, they’ll become outdated soon and all previous efforts will be in vain.

So it became pretty clear that some major restructuring was needed to make this automated test suite maintainable and ready for the future. Even more so since the intent is to hand over the test suite to the standing organization once the project finishes (go-live is scheduled for Q3 of this year at the moment).

I have a hard time teaching others something that I don't fully understand myself (and who doesn't?), so I knew I needed to come up with a better structure. I learned some valuable lessons along the way, and I will share them with you in the remainder of this post.

Lesson 1: Take a good look at the application under test
The application under test – a supply chain suite for an online retailer – seemed to be a rather complex one at first. At second glance, however, it wasn’t that complicated a system at all.

The application mainly acts as an accumulator and distributor of information related to various order types, i.e., a message goes into the application, it does some data storage and processing, and another message (or maybe multiple messages) come out again. This means that there are basically three different types of actions that a system-level test should perform:

  1. Send a predefined input message to the system
  2. Check whether the input leads to one or more output messages
  3. Check whether the output message(s) contain(s) the correct information

That was pretty much all there was to the system. All test scenarios consisted of a combination of these three actions, where different input messages (both in message types and message data) triggered different test scenarios.
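The three action types above can also be sketched in code. The following is a minimal, self-contained sketch; the SystemUnderTest class and its methods are hypothetical stand-ins, not the actual application or FitNesse fixtures:

```java
import java.util.List;

public class OrderFlowSketch {

    // A fake system that simply produces one output message per input message
    static class SystemUnderTest {
        private String lastInput;

        void send(String message) {            // action type 1
            lastInput = message;
        }

        List<String> receiveOutput() {         // action type 2
            return List.of("WarehouseInstruction for " + lastInput);
        }
    }

    public static void main(String[] args) {
        SystemUnderTest sut = new SystemUnderTest();

        // 1. Send a predefined input message to the system
        sut.send("OrderCreationMessage");

        // 2. Check whether the input leads to one or more output messages
        List<String> output = sut.receiveOutput();
        if (output.isEmpty()) {
            throw new AssertionError("no output message received");
        }

        // 3. Check whether the output message(s) contain the correct information
        if (!output.get(0).contains("OrderCreationMessage")) {
            throw new AssertionError("unexpected output message content");
        }

        System.out.println("All three checks passed");
    }
}
```

Every actual test scenario is then a matter of swapping in different input messages and different expectations on the output.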

Lesson 2: Create reusable building blocks
Once I realized the above, I could start to untangle the existing test suite by identifying and isolating reusable building blocks. Each of these blocks was responsible for exactly one of the three action types I identified. This meant I could create blocks (which are essentially wiki pages) for:

  • Sending an input message to the AUT (one page for each input message type)
  • Capturing the resulting output messages (one page for each output message type)
  • Validating the contents of the output messages (one page for each output message type)

I also created pages containing message templates for the messages to be sent, and some helper pages that contained global variables and global scenarios. The latter contain FitNesse fixtures that can be used independent of a message type.

For instance, all message types contain the same header, and therefore message header validation is the same for every message independent of its type and the data it contains. Therefore only a single validation scenario is needed to be able to validate all message headers.
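On the GlobalScenarios page, such a shared scenario could look roughly like this (a sketch only; the scenario name, fixture method, and field values are hypothetical and depend on the Slim fixtures in your own project):

```
!|scenario |validate message header          |
|check     |header field|sender  |RETAILER   |
|check     |header field|version |1.0        |
```

Any test page that includes GlobalScenarios can then invoke the scenario from a script table, so the header check is written once and reused for every message type.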

Lesson 3: Limit the include depth to 1 when creating tests
After all building blocks and other helper elements were in place, I could start creating actual tests by simply stringing together the required message templates and building blocks.

The resulting FitNesse test suite structure

An example test scenario where an order containing a single article is sent to and handled by a warehouse could look like this:

!include -c GlobalVariables
!include -c GlobalScenarios
!include -c OrderCreationMessageTemplateSingleArticle
!include -c SendOrderCreationMessage (action type 1)
!include -c CaptureWarehouseInstructionMessage (action type 2)
!include -c ValidateWarehouseInstructionMessage (action type 3)
!include -c OrderCompletionMessageTemplateSingleArticle
!include -c SendWarehouseOrderCompletionMessage (action type 1)
!include -c CaptureOrderCompletionFeedbackMessage (action type 2)
!include -c ValidateOrderCompletionFeedbackMessage (action type 3)

As you can see from the example, there was never more than a single level of inclusion as I didn’t allow building blocks to refer to other building blocks by inclusion. In this way, I was able to reform the existing automated test suite to something that looks a lot more like this:

With a bit of effort your automated FitNesse tests might look like this!

Fundamentally, it consists of the same ingredients as the spaghetti, but it’s far more structured and just as tasty!

As a relative FitNesse novice I am very curious to read about your experience with the tool, so please do share your lessons learned in the comments. Maybe we can create even better recipes together!

Using the Page Object Model pattern in Selenium + TestNG tests

After having introduced the Selenium + TestNG combination in my previous post, I would like to show you how to apply the Page Object Model, an often used method for improving maintainability of Selenium tests, to this setup. To do so, we need to accomplish the following steps:

  • Create Page Objects representing pages of a web application that we want to test
  • Create methods for these Page Objects that represent actions we want to perform within the pages that they represent
  • Create tests that perform these actions in the required order and perform the checks that make up the test scenario
  • Run the tests as TestNG tests and inspect the results

Creating Page Objects for our test application
For this purpose, again I use the ParaBank demo application that can be found here. I’ve narrowed the scope of my tests down to just three of the pages in this application: the login page, the home page (where you end up after a successful login) and an error page (where you land after a failed login attempt). As an example, this is the code for the login page:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
	private WebDriver driver;

	public LoginPage(WebDriver driver) {
		this.driver = driver;
		if (!driver.getTitle().equals("ParaBank | Welcome | Online Banking")) {
			throw new IllegalStateException("This is not the login page");
		}
	}

	public ErrorPage incorrectLogin(String username, String password) {
		// the ParaBank login fields are identified here by their name attribute
		driver.findElement(By.name("username")).sendKeys(username);
		driver.findElement(By.name("password")).sendKeys(password);
		driver.findElement(By.xpath("//input[@value='Log In']")).click();
		return new ErrorPage(driver);
	}

	public HomePage correctLogin(String username, String password) {
		driver.findElement(By.name("username")).sendKeys(username);
		driver.findElement(By.name("password")).sendKeys(password);
		driver.findElement(By.xpath("//input[@value='Log In']")).click();
		return new HomePage(driver);
	}
}
It contains a constructor that verifies that the driver has indeed landed on the login page, as well as two methods that we can use in our tests: incorrectLogin, which sends us to the error page, and correctLogin, which sends us to the home page. Likewise, I've constructed Page Objects for these two pages as well. A link to those implementations can be found at the end of this post.

Note that this code snippet isn’t optimized for maintainability – I used direct references to element properties rather than some sort of element-level abstraction, such as an Object Repository.

Creating methods that perform actions on the Page Objects
You’ve seen these for the login page in the code sample above. I’ve included similar methods for the other two pages. A good example can be seen in the implementation of the error page Page Object:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class ErrorPage {
	private WebDriver driver;

	public ErrorPage(WebDriver driver) {
		this.driver = driver;
	}

	public String getErrorText() {
		return driver.findElement(By.className("error")).getText();
	}
}
By implementing a getErrorText method to retrieve the error message that is displayed on the error page, we can call this method in our actual test script. It is considered good practice to separate the implementation of your Page Objects from the actual assertions that are performed in your test script (separation of responsibilities). If you need to perform additional checks, just add a method that returns the actual value displayed on the screen to the associated page object and add assertions to the scripts where this check needs to be performed.

Create tests that perform the required actions and execute the required checks
Now that we have created both the page objects and the methods that we want to use for the checks in our test scripts, it’s time to create these test scripts. This is again pretty straightforward, as this example shows (imports removed for brevity):

package com.ontestautomation.seleniumtestngpom.tests;

public class TestNGPOM {
	WebDriver driver;

	@BeforeSuite
	public void setUp() {
		driver = new FirefoxDriver();
		driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
		driver.get("http://parabank.parasoft.com"); // navigate to the AUT
	}

	@Parameters({ "username", "password" })
	@Test(description="Performs an unsuccessful login and checks the resulting error message")
	public void testLoginNOK(String username, String incorrectpassword) {
		LoginPage lp = new LoginPage(driver);
		ErrorPage ep = lp.incorrectLogin(username, incorrectpassword);
		Assert.assertEquals(ep.getErrorText(), "The username and password could not be verified.");
	}

	@AfterSuite
	public void tearDown() {
		driver.quit();
	}
}
Note the use of the page objects and the check being performed using methods in these page object implementations – in this case the getErrorText method in the error page page object.

As we have designed our tests as Selenium + TestNG tests, we also need to define a testng.xml file that defines which tests we need to run and what parameter values the parameterized testLoginNOK test takes. Again, see my previous post for more details.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="My first TestNG test suite" verbose="1" >
  <parameter name="username" value="john"/>
  <parameter name="password" value="demo"/>
  <test name="Login tests">
    <packages>
      <package name="com.ontestautomation.seleniumtestngpom.tests" />
    </packages>
  </test>
</suite>
Run the tests as TestNG tests and inspect the results
Finally, we can run our tests again by right-clicking on the testng.xml file in the Package Explorer and selecting 'Run As > TestNG Suite'. After test execution has finished, the test results will appear in the 'Results of running suite' tab in Eclipse. Again, please note that using meaningful names for tests and test suites in the testng.xml file makes these results much easier to read and interpret.

TestNG test results in Eclipse

An extended HTML report can be found in the test-output subdirectory of your project:

TestNG HTML test results

The Eclipse project I have used for the example described in this post, including a sample HTML report as generated by TestNG, can be downloaded here.