Let your test automation talk to you

Most of my current projects involve me paving the way for automation: discussing requirements and needs with stakeholders, deciding on an approach and the tools to use, setting up a solution and some initial tests, and then instructing others and handing over the project. I love doing this type of project, as it allows me (someone who’s bored quite easily and quickly) to move from project to project and from client to client regularly (sometimes once every couple of weeks). One of the most important factors in facilitating an easy handover of ‘my’ (it isn’t really mine, of course) automation solution to those who are going to continue and expand on it is making the automation setup as self-explanatory as possible.

This is especially true when those who are going to take over don’t have as much experience with the tools and architecture that have been decided upon (or, in some cases, that I’ve decided to use). Let’s take a look at two ways to make it easy to hand over automation (results) to your successor (stakeholder), or, as I like to call it, to have your automation talk to you about what it’s doing and what the results of executing the automated tests are. Not literally talk to you (though that would be nice!), but you know what I mean.

Through the code
One of the most effective ways of making your automation solution easily explainable to others is through the code itself. Your code should not only be well structured and maintainable; good code also requires little to no comments (who’s got the time or motivation to write comments anyway?) because it is self-explanatory. Some prime examples of self-describing code that I’ve used or seen are:

Having your Page Object methods in Selenium return Page Objects.
When you write your Page Object methods like this:

public LoginPage SetEmailAddressTo(string emailAddress)
{
    PElements.SendKeys(_driver, textfieldEmailAddress, emailAddress);
    return this;
}

public LoginPage SetPasswordTo(string password)
{
    PElements.SendKeys(_driver, textfieldPassword, password);
    return this;
}

public void ClickLoginButton()
{
    PElements.Click(_driver, buttonLogin);
}
you can write your tests (or your step definition implementations, if you’re using Cucumber or SpecFlow) like this:

[When(@"he logs in using his credentials")]
public void WhenHeLogsInUsingHisCredentials()
{
    // Example credentials for illustration purposes
    new LoginPage()
        .SetEmailAddressTo("user@example.com")
        .SetPasswordTo("mypassword")
        .ClickLoginButton();
}

It’s instantly clear what your test is doing here, all by means of allowing method chaining through well-chosen method return types and expressive method names.
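
Stripped of Selenium and SpecFlow, the chaining mechanism itself is nothing more than methods that return the object they were called on. Here is a minimal, self-contained sketch (in Java, with made-up names and values) of that mechanic:

```java
// Minimal sketch of method chaining, independent of any UI automation library.
// All names and values here are illustrative.
class LoginPageSketch {

    private String emailAddress;
    private String password;

    LoginPageSketch setEmailAddressTo(String emailAddress) {
        this.emailAddress = emailAddress;
        return this; // returning 'this' is what enables chaining
    }

    LoginPageSketch setPasswordTo(String password) {
        this.password = password;
        return this;
    }

    String describe() {
        return "logging in as " + emailAddress;
    }

    public static void main(String[] args) {
        // Reads almost like a sentence, just as in the page object example above
        String description = new LoginPageSketch()
                .setEmailAddressTo("user@example.com")
                .setPasswordTo("secret")
                .describe();
        System.out.println(description);
    }
}
```

The only design decision that matters here is the return type: as soon as a method returns void, the chain ends, which is exactly why a click-and-navigate method like ClickLoginButton makes a natural final step.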

Choosing to work with libraries that offer fluent APIs
Two prime examples I often use are REST Assured and WireMock. For example, REST Assured allows you to create powerful, yet highly readable tests for RESTful APIs, while abstracting away boilerplate code like this:

@Test
public void verifyCountryForZipCode() {
	given().
	when().
		get("http://api.zippopotam.us/us/90210").
	then().
		assertThat().
		body("country", equalTo("United States"));
}

WireMock does something similar, but for creating over-the-wire mocks:

public void setupExampleStub() {
	// Endpoint and response body are illustrative
	stubFor(post(urlEqualTo("/example")).
		willReturn(aResponse().
			withStatus(200).
			withHeader("Content-Type", "application/xml").
			withBody("<response>Example response body</response>")));
}

One of the main reasons I like to work with both is that it’s so easy to read the code and explain to others what it does, without loss of power and features.

Create fluent assertions
Arguably the most important part of your test code is where the assertions are made. Of course you need readable arrange (given) and act (when) sections too, but when you’re explaining or demonstrating your code to others, having readable assert (then) sections will be of great help, simply because that’s where the magic happens (so to speak). You can make your assertions pretty much self-explanatory by using libraries such as Hamcrest when you’re using Java, or FluentAssertions when you’re working with C#. The latter even lets you produce more readable error messages when an assertion fails:

IEnumerable<int> collection = new[] { 1, 2, 3 };
collection.Should().HaveCount(4, "because we thought we put four items in the collection");

results in the following error message:

Expected <4> items because we thought we put four items in the collection, but found <3>.

Through the reporting
Now that you’ve got your code all cleaned up and readable, it’s time to shift our attention towards the output of the test execution: your reporting. We’ve seen the first step towards clearly readable and therefore useful reporting above: readable error messages. But good reporting is more than that. First, you need to think of the audience for your report. Who’s going to be informed by your report?

  • Is it a developer? In that case you might want to include details about the test data used, steps to reproduce the erroneous situation and detailed information about the failure (such as stack traces and screenshots) in the report. Pretty much everything that makes it easy for your devs to analyze the failure, so to say. In my experience, more is better here.
  • Is it a manager or a product owner? In that case he or she is probably only interested in the overall outcome (did the tests pass?). Maybe also in test coverage (what is it that we covered with these automated tests?). They’ll probably be less interested in stack traces, though.
  • Is it a build tool, such as Jenkins or Bamboo? In that case the highest priority is that the results are presented in a way that can be interpreted automatically by that tool. A prime example is the xUnit test output format that frameworks such as JUnit and TestNG produce. These can be picked up, parsed and presented by Jenkins, Bamboo and any other CI tool that’s worth its salt without any additional configuration or programming.
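
To illustrate, a minimal xUnit-style XML report might look like this (class names, counts and timings are all made up); Jenkins, Bamboo and comparable CI tools can parse this format out of the box:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="LoginTests" tests="2" failures="1" errors="0" time="3.417">
  <testcase classname="com.example.LoginTests" name="successfulLogin" time="1.204"/>
  <testcase classname="com.example.LoginTests" name="loginWithWrongPassword" time="2.213">
    <failure message="Expected error message was not shown">stack trace goes here</failure>
  </testcase>
</testsuite>
```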

Again, think about the audience for your test automation reports, and include the information that’s valuable and useful to them. Nothing less, but nothing more either. This might require you to talk with these stakeholders. This might even require you to create more than a single report for every test run. Doesn’t matter. Have your automation talk to you and to your stakeholders by means of proper reporting.

Managing and publishing test results with Tesults

A very important factor in the success of test automation is making test results readily available for everyone to see. Even the greatest set of automated tests becomes a lot less powerful if the results aren’t visible to anybody interested in the quality of the system under test. When people see that all tests pass and keep passing, it builds trust in the people working on or with an application. When tests fail, it should be immediately visible to all parties involved, so adequate actions can be taken to fix things (be it either the test itself or the system under test). Therefore, having adequate reporting in place and making test results easily accessible and clearly visible is something every development and testing team should take care of. Luckily, a lot of people reading this blog agree (at least to some extent), because the posts I’ve written in the past about creating test reports are among the most read on the site.

Recently, I’ve been introduced to a relatively new venture that attempts to make the management and publication of test results as easy as possible. Tesults is a cloud-based platform that can be used to upload and display automated test results in order to give better insight into the quality of your system under test. Recently, they released a Java wrapper that allows for easy integration of the uploading of test results to Tesults, which triggered me to try it out and see how well Tesults works. In this blog post, I’ll walk you through the process.

After creating a Tesults account (it is a commercial service with a subscription model, but it’s free for everybody to try for 7 days), you have to set up a project and a target that the test results will be associated with. For example, the project can relate to the application you’re working on, whereas the target (which is a child of a project) can be used to distinguish between branches, projects, whatever you like. For each target, Tesults creates a target string that you need to specify in your test project and which forms the connection between your test results and the Tesults project. We’ll see how this target string is used later on.

We also need a couple of tests that generate the test results to be uploaded to Tesults. For this, I wrote some straightforward REST Assured tests: a few that should pass, one that should fail, and some data-driven tests to see how Tesults handles those. As an example, here’s the test that’s supposed to fail:

@Test(description = "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)")
public void simpleDriverTestWithFailure() {
	given().
	when().
		get("http://ergast.com/api/f1/2016/drivers.json").
	then().
		assertThat().
		body("MRData.DriverTable.Drivers.driverId", hasItem("senna"));
}

Basically, for the test results to be uploaded to Tesults, they need to be gathered in a test results object. Each test case result consists of

  • A name for the test
  • A description of the purpose of the test
  • The test suite that the test belongs to (each target can in turn have multiple test suites)
  • The set of parameters used when invoking the test method
  • A result (obviously), which can be either ‘pass’, ‘fail’ or ‘unknown’
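
Each of these fields ends up as a key-value pair in a map. As a minimal, self-contained sketch (plain Java collections; names and values are illustrative, and the key names mirror those used in my demo code further down):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a single test case result as a key-value map.
// Key names mirror the fields listed above; values are made up.
class TesultsCaseSketch {

    static Map<String, Object> buildCase(String name, String desc,
            String suite, boolean passed, String failureReason) {
        Map<String, Object> testCase = new HashMap<>();
        testCase.put("name", name);
        testCase.put("desc", desc);
        testCase.put("suite", suite);
        testCase.put("result", passed ? "pass" : "fail");
        if (!passed) {
            // Only failed tests carry a failure reason
            testCase.put("reason", failureReason);
        }
        return testCase;
    }

    public static void main(String[] args) {
        Map<String, Object> failedCase = buildCase(
                "simpleDriverTestWithFailure",
                "Senna should be in the 2016 F1 driver list",
                "RestAssuredDemoSuite",
                false,
                "Expected collection to contain 'senna'");
        System.out.println(failedCase.get("result"));
    }
}
```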

For my demo tests, I’m creating the test results object as follows:

List<Map<String, Object>> testCases = new ArrayList<Map<String, Object>>();

@AfterMethod
public void createResults(ITestResult result) {
	Map<String, Object> testCase = new HashMap<String, Object>();
	Map<String, String> parameters = new HashMap<String, String>();
	// Store the parameters used when invoking the test method (if any)
	Object[] methodParams = result.getParameters();
	if (methodParams.length > 0) {
		for (int i = 0; i < methodParams.length; i++) {
			parameters.put("param" + i, methodParams[i].toString());
		}
		testCase.put("params", parameters);
	}
	// Add test case name, description and the suite it belongs to
	testCase.put("name", result.getMethod().getMethodName());
	testCase.put("desc", result.getMethod().getDescription());
	testCase.put("suite", "RestAssuredDemoSuite");
	// Determine test result and add it, along with the error message in case of failure
	if (result.getStatus() == ITestResult.SUCCESS) {
		testCase.put("result", "pass");
	} else {
		testCase.put("result", "fail");
		testCase.put("reason", result.getThrowable().getMessage());
	}
	testCases.add(testCase);
}
So, after each test, I add the result of that test to the results object, together with additional information regarding that test. And yes, I know the above code is suboptimal (especially with regards to the parameter processing), but it works for this example.

After all tests have been executed, the test results can be uploaded to the Tesults platform with another couple of lines of code:

@AfterClass
public void uploadResults() {
	Map<String, Object> uploadResults = new HashMap<String, Object>();
	uploadResults.put("cases", testCases);
	Map<String, Object> uploadData = new HashMap<String, Object>();
	uploadData.put("target", TOKEN);
	uploadData.put("results", uploadResults);
	Map<String, Object> uploadedOK = Results.upload(uploadData);
	System.out.println("SUCCESS: " + uploadedOK.get("success"));
	System.out.println("MESSAGE: " + uploadedOK.get("message"));
}

Note the use of the target string here. Again, this string associates a test results object with the right Tesults project and target. After test execution and uploading of the test results, we can visit the Tesults web site to see what our test results look like:

Tesults results overview

You can click on each test to see the individual results:

Tesults test result details

In the time I’ve used it, Tesults has proven to be an easy-to-use way of uploading and publishing test results to all parties involved, thereby giving essential insight into test results and product quality. From discussions with them, I learned that the Tesults team is also considering the option of letting users upload a file along with a test case. This could be used, for example, to attach a screenshot of the browser state whenever a user interface-driven test fails, or to include a log file for more efficient failure analysis.

While on the subject of support and communication, the support I’ve been receiving from the Tesults people has been excellent. I had some trouble getting Tesults to work, and they’ve been very helpful when the problem was on my side and absolutely lightning fast with fixing the issues that were surfacing in their product. I hope they can keep this up as the product and its user base grows!

Having said that, it should be noted that Tesults is still a product under development, so by the time you’re reading this post, features might have been added, other features might look different, and maybe some features will have been removed entirely. I won’t be updating this post for every new feature added, changed or removed. I suggest you take a look at the Tesults documentation for an overview of the latest features.

On a closing note, I’ve mentioned earlier in this post that Tesults is a commercially licensed solution with a subscription-based revenue model. The Tesults team have told me that their main target market is teams, projects and (smaller) organizations of 10-15 up to around 100 people. For smaller teams that might not want to invest too heavily in a reporting solution, they are always open to discussing custom plans. In that case, you can contact them at enterprise@tesults.com. As I said, I’ve found the Tesults team to be extremely communicative, helpful and open to suggestions.

Disclaimer: I am in no way, shape or form associated with, nor do I have a commercial interest in Tesults as an organization or a product. I AM, however and again, of the opinion that good reporting is a critical but overlooked factor in the success of test automation. In my opinion, Tesults is an option well worth considering, given the ease of integration and the way test results are published and made available to stakeholders. I’m confident that this will only improve with time, as new features are added on a very regular basis.