Trust automation

By now, most people will be aware that the primary goal of test automation is NOT ‘finding bugs’. Sure, it’s nice if your automated tests catch a bug or two that would otherwise find its way into your production environment, but until artificial intelligence in test automation takes off (by which I mean REALLY takes off), testers will be way more adept at finding bugs than even the most sophisticated of automated testing solutions.

No, the added value of test automation is in something different: confidence. From the Oxford dictionary:

Confidence: Firm belief in the reliability, truth, or ability of someone or something.

A set of good automated tests will instill in stakeholders (including but definitely not limited to testers) the confidence that a system under test produces output or performs certain functions as previously specified (whether that’s also the intended way the system should work is a whole different discussion), even after rounds and rounds of changes, fixes, patches, refactorings and other updates.

This confidence is established by trust:

Trust: The feeling or belief that one can have faith in or rely on someone or something.

Until recently, I wasn’t aware of the subtle difference between trust and confidence. It doesn’t really help either that both terms translate to the same word in Dutch (‘vertrouwen’). Then I saw this explanation by Michalis Pavlidis:

Both concepts refer to expectations which may lapse into disappointments. However, trust is the means by which someone achieves confidence in something. Trust establishes confidence. The other way to achieve confidence is through control. So, you will feel confident in your friend that he won’t betray you if you trust him or/and if you control him.

Taking this back to software development, and to test automation in particular: confidence in the quality of a piece of software is achieved by control over its quality or by trust therein. As we’re talking about test automation here, more specifically the automated execution of predefined checks, I think it’s safe to say that we can forget about test automation being able to actively control the quality of the end product. That leaves the instilling of trust (with confidence as a result) in the system under test as a prime goal for test automation. It’s generally a bad idea to make automated checks the sole responsible entity for creating and maintaining this trust, by the way, but when applied correctly, they can certainly contribute a lot.

However, in order to enable your automated checks to build this trust, you need to trust the automated checks themselves as well. Simply (?) put: I wouldn’t be confident in the quality of a system if that quality was (even partially) assessed by automated checks I don’t trust.

Luckily, there are various ways to increase the trust in your automated checks. Here are three of them:

  • Run your automated checks early and often to see if they are fast, stable and don’t turn out to be high-maintenance drama queens.
  • Check your checks to see if they still have the trust-creation power you intended them to have when you first created them. Investigate whether applying techniques such as mutation testing can be helpful when doing so.
  • Have your automated checks communicate both their intent (what is it that is being checked?) and their result (what was the outcome? what went wrong?) clearly and unambiguously to the outside world. This is something I’ll definitely investigate and write about in future posts (a quick illustration follows below), but it should be clear that automated checks that are unclear about their intent and their result are not providing the value they should.
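
As a quick illustration of that last point, consider the minimal sketch below (it uses TestNG, REST Assured and the public Ergast Formula 1 API): the check states what it verifies in its name and description, and a failing Hamcrest assertion reports both the expected and the actual value.

// A check that communicates its intent through its name and description.
// Uses static imports for given() (RestAssured) and equalTo() (Hamcrest Matchers).
@Test(description = "The 2016 Formula 1 season consists of 21 races")
public void seasonShouldContainTwentyOneRaces() {

	given().
	when().
		get("http://ergast.com/api/f1/2016.json").
	then().
		assertThat().
		body("MRData.RaceTable.Races.size()", equalTo(21));
}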

Test automation is great, yet it renders itself virtually worthless if it can’t be trusted. Make sure that you trust your automation before you use it to build confidence in your system under test.

Managing and publishing test results with Tesults

A very important factor in the success of test automation is making test results readily available for everyone to see. Even the greatest set of automated tests becomes a lot less powerful if the results aren’t visible to anybody interested in the quality of the system under test. When people see that all tests pass and keep passing, it builds trust among the people working on or with an application. When tests fail, it should be immediately visible to all parties involved, so adequate actions can be taken to fix things (be it the test itself or the system under test). Therefore, having adequate reporting in place and making test results easily accessible and clearly visible is something every development and testing team should take care of. Luckily, a lot of people reading this blog agree (at least to some extent), because the posts I’ve written in the past about creating test reports are among the most read on the site.

Recently, I’ve been introduced to a relatively new venture that attempts to make the management and publication of test results as easy as possible. Tesults is a cloud-based platform that can be used to upload and display automated test results in order to give better insight into the quality of your system under test. They recently released a Java wrapper that makes it easy to upload test results to Tesults, which prompted me to try it out and see how well it works. In this blog post, I’ll walk you through the process.

After creating a Tesults account (it is a commercial service with a subscription model, but it’s free for everybody to try for 7 days), you have to set up a project and a target that the test results will be associated with. For example, the project can relate to the application you’re working on, whereas the target (which is a child of a project) can be used to distinguish between branches, projects, whatever you like. For each target, Tesults creates a target string that you need to specify in your test project and which forms the connection between your test results and the Tesults project. We’ll see how this target string is used later on.
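
In the test code shown below, I simply keep this target string in a constant; the value here is a placeholder, not a real token:

// The target string generated by Tesults for this project/target.
// Replace the placeholder with the token generated for your own target.
private static final String TOKEN = "your-tesults-target-token";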

We also need a couple of tests that generate the test results that are to be uploaded to Tesults. For this, I wrote some straightforward REST Assured tests: a couple that should pass, one that should fail, and some data driven tests to see how Tesults handles these. As an example, here’s the test that’s supposed to fail:

// Note: this uses static imports for given() (RestAssured) and hasItem() (Hamcrest Matchers)
@Test(description = "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)")
public void simpleDriverTestWithFailure() {
		
	given().
	when().
		get("http://ergast.com/api/f1/2016/drivers.json").
	then().
		assertThat().
		body("MRData.DriverTable.Drivers.driverId", hasItem("senna"));
}
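
The data driven tests pass a circuit and a country into the test method; you’ll see those two parameters again in the reporting code below. As a rough sketch (the data provider name and the example data are just for illustration), such a test could look like this:

// Illustrative data driven test; circuit and country are supplied by a TestNG data provider.
@DataProvider(name = "circuits")
public Object[][] circuitProvider() {
	return new Object[][] {
		{ "monza", "Italy" },
		{ "spa", "Belgium" }
	};
}

@Test(dataProvider = "circuits", description = "Test that a circuit is located in the expected country")
public void circuitInCountryTest(String circuit, String country) {

	given().
	when().
		get("http://ergast.com/api/f1/circuits/" + circuit + ".json").
	then().
		assertThat().
		body("MRData.CircuitTable.Circuits.Location.country", hasItem(country));
}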

Basically, for the test results to be uploaded to Tesults, they need to be gathered in a test results object. Each test case result consists of:

  • A name for the test
  • A description of the purpose of the test
  • The test suite that the test belongs to (each target can in turn have multiple test suites)
  • The set of parameters used when invoking the test method
  • A result (obviously), which can be ‘pass’, ‘fail’ or ‘unknown’

For my demo tests, I’m creating the test results object as follows:

List<Map<String,Object>> testCases = new ArrayList<Map<String, Object>>();

@AfterMethod
public void createResults(ITestResult result) {
		
	Map<String, Object> testCase = new HashMap<String, Object>();
		
	Map<String, String> parameters = new HashMap<String, String>();
		
	Object[] methodParams = result.getParameters();
		
	if(methodParams.length > 0){
		parameters.put("circuit",methodParams[0].toString());
		parameters.put("country",methodParams[1].toString());
	}
		
	// Add test case name, description and the suite it belongs to
	testCase.put("name", result.getMethod().getMethodName());
	testCase.put("desc", result.getMethod().getDescription());
	testCase.put("params",parameters);
	testCase.put("suite", "RestAssuredDemoSuite");
				
	// Determine test result and add it, along with the error message in case of failure
	if (result.getStatus() == ITestResult.SUCCESS) {
		testCase.put("result", "pass");
	} else {
		testCase.put("result", "fail");
		testCase.put("reason", result.getThrowable().getMessage());
	}
	
	testCases.add(testCase);
}

So, after each test, I add the result of that test to the results object, together with additional information regarding that test. And yes, I know the above code is suboptimal (especially with regard to the parameter processing), but it works for this example.
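
If you’d rather not hard-code the parameter names, one option is to simply number the parameters instead, at the cost of losing the descriptive ‘circuit’ and ‘country’ keys. A rough sketch:

// A more generic (if less descriptive) way to capture the test method parameters:
// number them instead of hard-coding their names.
Object[] methodParams = result.getParameters();

for (int i = 0; i < methodParams.length; i++) {
	parameters.put("param" + i, String.valueOf(methodParams[i]));
}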

After all tests have been executed, the test results can be uploaded to the Tesults platform with another couple of lines of code:

@AfterClass
public void uploadResults() {
		
	Map<String,Object> uploadResults = new HashMap<String,Object>();
	uploadResults.put("cases", testCases);
		
	Map<String,Object> uploadData = new HashMap<String,Object>();
	uploadData.put("target", TOKEN);
	uploadData.put("results", uploadResults);
		
	Map<String, Object> uploadedOK = Results.upload(uploadData);
	System.out.println("SUCCESS: " + uploadedOK.get("success"));
	System.out.println("MESSAGE: " + uploadedOK.get("message"));
}

Note the use of the target string here. Again, this string associates a test results object with the right Tesults project and target. After test execution and uploading of the test results, we can visit the Tesults web site to see what our test results look like:

Tesults results overview

You can click on each test to see the individual results:

Tesults test result details

In the time I’ve used it, Tesults has proven to be an easy-to-use way of uploading and publishing test results to all parties involved, thereby giving essential insight into test results and product quality. From my discussions with them, I learned that the Tesults team is also considering the option of letting users upload a file corresponding with a test case. This could be used, for example, to attach a screenshot of the browser state whenever a user interface-driven test fails, or to include a log file for more efficient failure analysis.

While on the subject of support and communication, the support I’ve been receiving from the Tesults people has been excellent. I had some trouble getting Tesults to work, and they’ve been very helpful when the problem was on my side and absolutely lightning fast with fixing the issues that were surfacing in their product. I hope they can keep this up as the product and its user base grows!

Having said that, it should be noted that Tesults is still a product under development, so by the time you’re reading this post, features might have been added, other features might look different, and maybe some features will have been removed entirely. I won’t be updating this post for every new feature added, changed or removed. I suggest you take a look at the Tesults documentation for an overview of the latest features.

On a closing note, I mentioned earlier in this post that Tesults is a commercially licensed solution with a subscription-based revenue model. The Tesults team have told me that their main target market is teams, projects and (smaller) organizations of 10-15 up to around 100 people. For smaller teams that might not want to invest too heavily in a reporting solution, they are always open to discussing custom plans; in that case, you can contact them at enterprise@tesults.com. As I said, I’ve found the Tesults team to be extremely communicative, helpful and open to suggestions.

Disclaimer: I am in no way, shape or form associated with, nor do I have a commercial interest in Tesults as an organization or a product. I AM, however and again, of the opinion that good reporting is a critical but overlooked factor in the success of test automation. In my opinion, Tesults is an option well worth considering, given the ease of integration and the way test results are published and made available to stakeholders. I’m confident that this will only improve with time, as new features are added on a very regular basis.

Review: Automation Guild 2017

About half a year ago, in July of 2016 to be exact, I was invited by Joe from the well-known TestTalks podcast to contribute to a new initiative he had come up with: the Automation Guild conference. Joe was looking to organize an online conference fully dedicated to test automation, and he asked me if I wanted to host a session on testing RESTful APIs with REST Assured. Even though I’d never done anything like this before -or maybe because I’d never done anything like this before- I immediately said yes. Only later realizing what it was, exactly, that I had agreed to do…

Since the conference was online and Joe was looking for the best possible experience for the Automation Guild delegates, he asked each of the speakers to record a video session in advance, including sharing of screens and writing and executing code (where relevant, of course). This being an international conference of course also meant speaking in English, which made it all the more challenging for me personally. I’m fine with speaking in English, but the experience of recording that, listening to it and editing all the ‘ermm..’s and ‘uuuhh’s out was something entirely new, and not exclusively pleasant either! It also took me a lot longer than expected, but in the end, I was fairly happy with the result. And I learned a lot in the process, from English pronunciation to video editing, so it was definitely not all bad!

Enough about that, back to the conference. It was held last week, January 9th to 13th, with around five sessions every day plus a couple of keynotes. The actual videos were released beforehand so all attendees could watch them when it best suited their schedule, while on the conference days there were live Q&A sessions with all of the speakers to create a live and interactive atmosphere. I had never participated in anything similar, and even though I caught only a couple of sessions due to other obligations (the time zone difference didn’t help either), I think this worked remarkably well.

My own Q&A session flew by too, with a lot of questions ranging from the fairly straightforward to the pretty complex. These questions did not just cover the contents of my session but also API testing in general, and there were some questions about service virtualization as well, which made it an even more interesting half hour.

I very much liked this interactive Q&A part of my talk, and of the conference as a whole, since getting good questions meant that the stuff I talked about hit home with the listeners. I’ve had conference talks before where the audience was suspiciously quiet afterwards, and that’s neither a good thing nor an agreeable experience, I can tell you. But in this case, there were a lot of questions, and we didn’t even get to all of them during the Q&A. If all goes well, I should receive the remaining questions later on and get to interact with a couple more listeners that way. But even without those, I had an amazing time talking to Joe and (indirectly) to the attendees and answering their questions as best I could.

As for the other speakers, Joe managed to put together a world-class lineup, and I’m quite proud to have been a part of it. I never thought I’d be in a conference lineup together with John Sonmez, Alan Richardson, Seb Rose, Matt Wynne and so many other recognized names in the testing and test automation field. So far, I’ve only managed to watch a couple of the other speakers’ sessions, but luckily, all of them are available for a year after the end of the conference, so I’ll definitely watch more of them in a couple of weeks, when time permits.

I can only speak for myself, but with such an incredible lineup and over 750 registered attendees, I think the inaugural edition of Automation Guild was a big success. This is mostly due to the massive amount of effort Joe has put into setting it up. I can’t even begin to imagine how much time it must have cost him. Having said that, I am already looking forward to the second edition next year. If not as a second-time contributor, then surely as an attendee! If you missed the conference or couldn’t make it, mark your agenda for next year, because you don’t want to miss it again!