On elegance

For the last couple of weeks, I’ve been reading up on the work of one of the most famous Dutch computer scientists of all time: Edsger W. Dijkstra. Being from the same country and sharing a last name (we’re not related, as far as I know), I had of course heard of him and of the importance of his work to the computer science field, but it wasn’t until I saw a short documentary on his life (it is in Dutch, so probably useless to most of you) that I became interested in his work. Considering the subject of this blog, his views on software testing are of course what interests me the most, but there is more…

Most of you have probably heard what is perhaps his best known quote related to software testing:

Program testing can be used to show the presence of bugs, but never to show their absence.

However, that’s not what I wanted to write about in this blog post. Dijkstra left a legacy in the form of a large number of manuscripts, collectively known as the EWDs (so named because he signed them with his initials). For those of you who have a little spare time left, you can read all transcribed EWDs here.

I was particularly intrigued by a quote of Dijkstra’s that featured in the documentary, and which is included in EWD 1284:

After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure.

While Dijkstra was referring to solutions to computer science-related problems in general, and to software development and programming structures in particular, it struck me that this quote applies quite well to test automation too.

Here’s the definition of ‘elegance’ from the Oxford Dictionary:

  1. The quality of being graceful and stylish in appearance or manner.
  2. The quality of being pleasingly ingenious and simple.

To me, this is what defines a ‘good’ solution to any test automation problem: one that is stylish, ingenious and simple. Too often, I see solutions proposed (or answers to questions given) that are overly complicated, unstructured, not well thought out or just plain horrible. And, granted, I’ve been guilty of the same myself, and I’m probably still guilty of it from time to time. But as of now, I know at least one quality a good solution should possess: elegance.

Dijkstra also gives two typical characteristics of elegant solutions:

An elegant solution is shorter than most of its alternatives.

This is why I like tools such as REST Assured and WireMock so much: the code required to define tests and mocks in these tools is short, readable and powerful. Elegance at work.
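To illustrate, here’s a minimal sketch of my own (the endpoint and response body are made up, not taken from either project’s documentation): a complete, working WireMock stub in a single fluent statement.

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class DriverStubExample {

    public static void main(String[] args) {
        // Start a mock HTTP server on port 8080
        WireMockServer server = new WireMockServer(8080);
        server.start();
        configureFor("localhost", 8080);

        // One fluent statement defines a complete mock endpoint
        stubFor(get(urlEqualTo("/api/f1/2016/drivers.json"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"MRData\": {}}")));
    }
}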

An elegant solution is constructed out of logically divided components, each of which can be easily swapped out for an alternative without influencing the rest of the solution too much.

This is a standard that every test automation solution should be held to: if any of the components fails, is no longer maintained or is outperformed by an alternative, how much effort does it take to swap out that component?
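As a sketch of what that can look like in code (hypothetical names, my own illustration): hide each component behind a small interface, so that replacing it means writing one new implementation instead of touching the rest of the solution.

import java.util.List;
import java.util.Map;

// The rest of the solution depends only on this contract, not on a specific tool
public interface TestResultPublisher {
    void publish(List<Map<String, Object>> testResults);
}

// Swapping in another reporting back end means adding one implementation,
// without changing the tests themselves
class ConsolePublisher implements TestResultPublisher {
    @Override
    public void publish(List<Map<String, Object>> testResults) {
        testResults.forEach(r -> System.out.println(r.get("name") + ": " + r.get("result")));
    }
}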

As a last nugget of wisdom from Dijkstra, here’s another of his quotes on elegance:

Elegant solutions are often the most efficient.

And isn’t efficiency something we should all strive for when creating automated tests? It is definitely something I’ll try to pay more attention to in the future, along with effectiveness, which is maybe even more important.

However, according to my namesake, I am (and probably most of you are) failing to meet his standards in all of the projects that we’re working on. As he states:

For those who have wondered: I don’t think object-oriented programming is a structuring paradigm that meets my standards of elegance.

Ah well…

Trust automation

By now, most people will be aware that the primary goal of test automation is NOT ‘finding bugs’. Sure, it’s nice if your automated tests catch a bug or two that would otherwise find its way into your production environment, but until artificial intelligence in test automation takes off (by which I mean REALLY takes off), testers will be way more adept at finding bugs than even the most sophisticated of automated testing solutions.

No, the added value of test automation is in something different: confidence. From the Oxford Dictionary:

Confidence: Firm belief in the reliability, truth, or ability of someone or something.

A set of good automated tests will instill in stakeholders (including but definitely not limited to testers) the confidence that a system under test produces output or performs certain functions as previously specified (whether that’s also the intended way a system should work is a whole different discussion), even after rounds and rounds of changes, fixes, patches, refactorings and other updates.

This confidence is established by trust:

Trust: The feeling or belief that one can have faith in or rely on someone or something.

Until recently, I wasn’t aware of the subtle difference between trust and confidence. It doesn’t really help either that both terms translate to the same word in Dutch (‘vertrouwen’). Then I saw this explanation by Michalis Pavlidis:

Both concepts refer to expectations which may lapse into disappointments. However, trust is the means by which someone achieves confidence in something. Trust establishes confidence. The other way to achieve confidence is through control. So, you will feel confident in your friend that he won’t betray you if you trust him or/and if you control him.

Taking this back to software development, and to test automation in particular: confidence in the quality of a piece of software is achieved by control over its quality or by trust therein. As we’re talking about test automation here, more specifically the automated execution of predefined checks, I think it’s safe to say that we can forget about test automation being able to actively control the quality of the end product. That leaves instilling trust (with confidence as a result) in the system under test as a prime goal for test automation. It’s generally a bad idea to make automated checks solely responsible for creating and maintaining this trust, by the way, but when applied correctly, they can certainly contribute a lot.

However, in order to enable your automated checks to build this trust, you need to trust the automated checks themselves as well. Simply (?) put: I wouldn’t be confident in the quality of a system if that quality was (even partially) assessed by automated checks I don’t trust.

Luckily, there are various ways to increase the trust in your automated checks. Here are three of them:

  • Run your automated checks early and often to see if they are fast, stable and don’t turn out to be high-maintenance drama queens.
  • Check your checks to see if they still have the trust-creation power you intended them to have when you first created them. Investigate whether applying techniques such as mutation testing can be helpful when doing so.
  • Have your automated checks communicate both their intent (what is it that is being checked?) and their result (what was the result? what went wrong?) clearly and unambiguously to the outside world. This is something I’ll definitely investigate and write about in future posts, but it should be clear that automated checks that are unclear about their intent and their result are not providing the value they should. A small sketch of what this can look like follows after this list.
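Here’s that sketch (a hypothetical example of my own, in the same REST Assured style used later in this post): the intent lives in the test name and description, and the check targets a single, named field so that a failure points directly at what went wrong.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.testng.annotations.Test;

public class SeasonScheduleTest {

    // The name and description communicate the intent of this check;
    // the assertion on a single, named field communicates the result
    @Test(description = "Test that the 2016 F1 season consists of 21 races")
    public void seasonShouldContainTwentyOneRaces() {

        given().
        when().
            get("http://ergast.com/api/f1/2016.json").
        then().
            assertThat().
            body("MRData.RaceTable.Races.size()", equalTo(21));
    }
}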

Test automation is great, yet it renders itself virtually worthless if it can’t be trusted. Make sure that you trust your automation before you use it to build confidence in your system under test.

Managing and publishing test results with Tesults

A very important factor in the success of test automation is making test results readily available for everyone to see. Even the greatest set of automated tests becomes a lot less powerful if the results aren’t visible to anybody interested in the quality of the system under test. When people see that all tests pass and keep passing, it builds trust among everyone working on or with an application. When tests fail, this should be immediately visible to all parties involved, so that adequate action can be taken to fix things (be it the test itself or the system under test). Therefore, having adequate reporting in place and making test results easily accessible and clearly visible is something every development and testing team should take care of. Luckily, a lot of people reading this blog agree (at least to some extent), because the posts I’ve written in the past about creating test reports are among the most read on this site.

Recently, I was introduced to a relatively new venture that attempts to make the management and publication of test results as easy as possible. Tesults is a cloud-based platform that can be used to upload and display automated test results in order to give better insight into the quality of your system under test. They recently released a Java wrapper that allows for easy integration of the uploading of test results to Tesults, which prompted me to try it out and see how well Tesults works. In this blog post, I’ll walk you through the process.

After creating a Tesults account (it is a commercial service with a subscription model, but it’s free for everybody to try for 7 days), you have to set up a project and a target that the test results will be associated with. For example, the project can relate to the application you’re working on, whereas the target (which is a child of a project) can be used to distinguish between branches, projects, whatever you like. For each target, Tesults creates a target string that you need to specify in your test project and which forms the connection between your test results and the Tesults project. We’ll see how this target string is used later on.
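In the demo project below, the target string simply lives in a constant on the test class (the value here is a placeholder; in a real project, you’d want to read it from configuration rather than hard-code it):

// The target string generated by Tesults for this project/target combination
// (placeholder value; copy the actual string from your Tesults account)
private static final String TOKEN = "<your-tesults-target-string>";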

We also need a couple of tests that generate the test results that are to be uploaded to Tesults. For this, I wrote some straightforward REST Assured tests: a few that should pass, one that should fail, and some data-driven tests to see how Tesults handles those. As an example, here’s the test that’s supposed to fail:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.hasItem;

import org.testng.annotations.Test;

// Note: on REST Assured versions before 3.0, the static import is
// com.jayway.restassured.RestAssured.given instead
@Test(description = "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)")
public void simpleDriverTestWithFailure() {

    given().
    when().
        get("http://ergast.com/api/f1/2016/drivers.json").
    then().
        assertThat().
        body("MRData.DriverTable.Drivers.driverId", hasItem("senna"));
}

Basically, for the test results to be uploaded to Tesults, they need to be gathered in a test results object. Each test case result consists of:

  • A name for the test
  • A description of the purpose of the test
  • The test suite that the test belongs to (each target can in turn have multiple test suites)
  • The set of parameters used when invoking the test method
  • A result (obviously), which can be ‘pass’, ‘fail’ or ‘unknown’

For my demo tests, I’m creating the test results object as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;

List<Map<String, Object>> testCases = new ArrayList<Map<String, Object>>();

@AfterMethod
public void createResults(ITestResult result) {
		
	Map<String, Object> testCase = new HashMap<String, Object>();
		
	Map<String, String> parameters = new HashMap<String, String>();
		
	Object[] methodParams = result.getParameters();
		
	if(methodParams.length > 0){
		parameters.put("circuit",methodParams[0].toString());
		parameters.put("country",methodParams[1].toString());
	}
		
	// Add test case name, description and the suite it belongs to
	testCase.put("name", result.getMethod().getMethodName());
	testCase.put("desc", result.getMethod().getDescription());
	testCase.put("params",parameters);
	testCase.put("suite", "RestAssuredDemoSuite");
				
	// Determine test result and add it, along with the error message in case of failure
	if (result.getStatus() == ITestResult.SUCCESS) {
		testCase.put("result", "pass");
	} else {
		testCase.put("result", "fail");
		testCase.put("reason", result.getThrowable().getMessage());
	}
	
	testCases.add(testCase);
}

So, after each test, I add the result of that test to the results object, together with additional information about that test. And yes, I know the above code is suboptimal (especially with regard to the parameter processing), but it works for this example.
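For what it’s worth, here’s a slightly more general sketch: TestNG doesn’t expose parameter names here, so positional keys are about the best we can do, but the hard-coded ‘circuit’ and ‘country’ keys could be replaced with a simple loop over methodParams:

// Store every parameter by position instead of hard-coding specific names
for (int i = 0; i < methodParams.length; i++) {
    parameters.put("param" + i, String.valueOf(methodParams[i]));
}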

After all tests have been executed, the test results can be uploaded to the Tesults platform with another couple of lines of code:

import java.util.HashMap;
import java.util.Map;

import org.testng.annotations.AfterClass;

// The Results class comes from the Tesults Java library
import com.tesults.tesults.Results;

@AfterClass
public void uploadResults() {

    Map<String, Object> uploadResults = new HashMap<String, Object>();
    uploadResults.put("cases", testCases);

    Map<String, Object> uploadData = new HashMap<String, Object>();
    // TOKEN is the target string generated by Tesults (see above)
    uploadData.put("target", TOKEN);
    uploadData.put("results", uploadResults);

    // Upload the results and print the outcome of the upload itself
    Map<String, Object> uploadedOK = Results.upload(uploadData);
    System.out.println("SUCCESS: " + uploadedOK.get("success"));
    System.out.println("MESSAGE: " + uploadedOK.get("message"));
}

Note the use of the target string here. Again, this string associates a test results object with the right Tesults project and target. After test execution and uploading of the test results, we can visit the Tesults web site to see what our test results look like:

Tesults results overview

You can click on each test to see the individual results:

Tesults test result details

In the time I’ve used it, Tesults has proven to be an easy-to-use way of uploading and publishing test results to all parties involved, thereby giving essential insight into test results and product quality. From discussions with them, I learned that the Tesults team is also considering an option that lets users upload a file along with a test case. This could be used, for example, to attach a screenshot of the browser state whenever a user interface-driven test fails, or to include a log file for more efficient failure analysis.

While on the subject of support and communication: the support I’ve received from the Tesults people has been excellent. I had some trouble getting Tesults to work, and they’ve been very helpful when the problem was on my side, and absolutely lightning fast in fixing the issues that surfaced in their product. I hope they can keep this up as the product and its user base grow!

Having said that, it should be noted that Tesults is still a product under development, so by the time you’re reading this post, features might have been added, other features might look different, and maybe some features will have been removed entirely. I won’t be updating this post for every new feature added, changed or removed. I suggest you take a look at the Tesults documentation for an overview of the latest features.

On a closing note, I mentioned earlier in this post that Tesults is a commercially licensed solution with a subscription-based revenue model. The Tesults team have told me that their main target market is teams, projects and (smaller) organizations of 10-15 up to around 100 people. For smaller teams that might not want to invest too heavily in a reporting solution, they are always open to discussing custom plans. In that case, you can contact them at enterprise@tesults.com. As I said, I’ve found the Tesults team to be extremely communicative, helpful and open to suggestions.

Disclaimer: I am in no way, shape or form associated with, nor do I have a commercial interest in, Tesults as an organization or a product. I AM, however and again, of the opinion that good reporting is a critical but overlooked factor in the success of test automation. In my opinion, Tesults is an option well worth considering, given the ease of integration and the way test results are published and made available to stakeholders. I’m confident that this will only improve with time, as new features are added on a very regular basis.