Managing and publishing test results with Tesults

A very important factor in the success of test automation is making test results readily available for everyone to see. Even the greatest set of automated tests becomes a lot less powerful if the results aren’t visible to anyone interested in the quality of the system under test. When people see that all tests pass and keep passing, it builds trust among the people working on or with an application. When tests fail, that should be immediately visible to all parties involved, so adequate action can be taken to fix things (be it the test itself or the system under test). Therefore, having adequate reporting in place and making test results easily accessible and clearly visible is something every development and testing team should take care of. Luckily, a lot of people reading this blog agree (at least to some extent), because the posts I’ve written in the past about creating test reports are among the most read on the site.

Recently, I was introduced to a relatively new venture that attempts to make managing and publishing test results as easy as possible. Tesults is a cloud-based platform that can be used to upload and display automated test results, giving better insight into the quality of your system under test. They recently released a Java wrapper that makes it easy to integrate the uploading of test results to Tesults, which prompted me to try it out and see how well it works. In this blog post, I’ll walk you through the process.

After creating a Tesults account (it is a commercial service with a subscription model, but it’s free for everybody to try for 7 days), you have to set up a project and a target that the test results will be associated with. For example, the project can relate to the application you’re working on, whereas the target (which is a child of a project) can be used to distinguish between branches, projects, whatever you like. For each target, Tesults creates a target string that you need to specify in your test project and which forms the connection between your test results and the Tesults project. We’ll see how this target string is used later on.
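As a side note on handling that target string: since it ties uploads to your Tesults project, you may want to keep it out of source control. A minimal sketch of reading it from an environment variable (the variable name ‘TESULTS_TARGET’ is my own choice, not a Tesults convention):

```java
public class TesultsTargetSketch {

    // Illustrative only: read the Tesults target string from an environment
    // variable instead of hard-coding it; the variable name is an assumption.
    static String targetToken() {
        String token = System.getenv("TESULTS_TARGET");
        return (token != null && !token.isEmpty()) ? token : "missing-target-token";
    }

    public static void main(String[] args) {
        System.out.println(targetToken());
    }
}
```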

We also need a couple of tests to generate the test results that are to be uploaded to Tesults. For this, I wrote a few straightforward REST Assured tests: some that should pass, one that should fail, and some data-driven tests to see how Tesults handles those. As an example, here’s the test that’s supposed to fail:

// Requires static imports for REST Assured's given() and Hamcrest's hasItem()
@Test(description = "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)")
public void simpleDriverTestWithFailure() {

    given().
    when().
        get("http://ergast.com/api/f1/2016/drivers.json").
    then().
        assertThat().
        body("MRData.DriverTable.Drivers.driverId", hasItem("senna"));
}
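As for the data-driven tests: each run takes a circuit/country pair (which is also what the result-gathering code further down expects). Structurally, the test data is just an Object[][] in the shape a TestNG @DataProvider returns; the pairs below are my own illustrative picks:

```java
public class DataDrivenSketch {

    // Rows in the shape a TestNG @DataProvider would return:
    // one { circuit, country } pair per test invocation.
    static Object[][] circuitData() {
        return new Object[][] {
            { "monza", "Italy" },
            { "spa", "Belgium" }
        };
    }

    public static void main(String[] args) {
        for (Object[] row : circuitData()) {
            System.out.println(row[0] + " -> " + row[1]);
        }
    }
}
```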

Basically, for the test results to be uploaded to Tesults, they need to be gathered in a test results object. Each test case result consists of:

  • A name for the test
  • A description of the purpose of the test
  • The test suite that the test belongs to (each target can in turn have multiple test suites)
  • The set of parameters used when invoking the test method
  • A result (obviously), which can be ‘pass’, ‘fail’ or ‘unknown’
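In its most minimal form, a single test case is therefore nothing more than a map of key/value pairs; a quick sketch (all values here are examples):

```java
import java.util.HashMap;
import java.util.Map;

public class TesultsCaseSketch {

    // A single Tesults test case as a key/value map; all values are examples.
    static Map<String, Object> exampleCase() {
        Map<String, Object> testCase = new HashMap<String, Object>();
        testCase.put("name", "simpleDriverTestWithFailure");
        testCase.put("desc", "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)");
        testCase.put("suite", "RestAssuredDemoSuite");
        testCase.put("result", "fail"); // 'pass', 'fail' or 'unknown'
        return testCase;
    }

    public static void main(String[] args) {
        System.out.println(exampleCase().get("result"));
    }
}
```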

For my demo tests, I’m creating the test results object as follows:

List<Map<String, Object>> testCases = new ArrayList<Map<String, Object>>();

@AfterMethod
public void createResults(ITestResult result) {

    Map<String, Object> testCase = new HashMap<String, Object>();
    Map<String, String> parameters = new HashMap<String, String>();

    Object[] methodParams = result.getParameters();

    if (methodParams.length > 0) {
        parameters.put("circuit", methodParams[0].toString());
        parameters.put("country", methodParams[1].toString());
    }

    // Add test case name, description, parameters and the suite it belongs to
    testCase.put("name", result.getMethod().getMethodName());
    testCase.put("desc", result.getMethod().getDescription());
    testCase.put("params", parameters);
    testCase.put("suite", "RestAssuredDemoSuite");

    // Determine the test result and add it, along with the error message in case of failure
    if (result.getStatus() == ITestResult.SUCCESS) {
        testCase.put("result", "pass");
    } else {
        testCase.put("result", "fail");
        testCase.put("reason", result.getThrowable().getMessage());
    }

    testCases.add(testCase);
}

So, after each test, I add the result of that test to the results object, together with additional information about that test. And yes, I know the above code is suboptimal (especially with regard to the parameter processing), but it works for this example.
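For what it’s worth, a slightly more generic take on that parameter processing could fall back to positional names when the real parameter names aren’t available. This is purely a sketch, not the approach used above:

```java
import java.util.HashMap;
import java.util.Map;

public class ParamSketch {

    // Generic alternative to hard-coding 'circuit' and 'country': name the
    // parameters positionally when the actual names are not available.
    static Map<String, String> toParamMap(Object[] methodParams) {
        Map<String, String> parameters = new HashMap<String, String>();
        for (int i = 0; i < methodParams.length; i++) {
            parameters.put("param" + i, String.valueOf(methodParams[i]));
        }
        return parameters;
    }

    public static void main(String[] args) {
        System.out.println(toParamMap(new Object[] { "monza", "Italy" }));
    }
}
```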

After all tests have been executed, the test results can be uploaded to the Tesults platform with another couple of lines of code:

@AfterClass
public void uploadResults() {

    Map<String, Object> uploadResults = new HashMap<String, Object>();
    uploadResults.put("cases", testCases);

    Map<String, Object> uploadData = new HashMap<String, Object>();
    uploadData.put("target", TOKEN); // TOKEN holds the target string for this Tesults target
    uploadData.put("results", uploadResults);

    Map<String, Object> uploadedOK = Results.upload(uploadData);
    System.out.println("SUCCESS: " + uploadedOK.get("success"));
    System.out.println("MESSAGE: " + uploadedOK.get("message"));
}
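For reference, the nested maps built above correspond to a JSON payload roughly like this; the shape is inferred from the code in this post, not taken from the Tesults API documentation:

```json
{
  "target": "<your target string>",
  "results": {
    "cases": [
      {
        "name": "simpleDriverTestWithFailure",
        "desc": "Test that Ayrton Senna is in the list of 2016 F1 drivers (fails)",
        "params": {},
        "suite": "RestAssuredDemoSuite",
        "result": "fail",
        "reason": "<assertion error message>"
      }
    ]
  }
}
```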

Note the use of the target string here. Again, this string associates a test results object with the right Tesults project and target. After test execution and uploading of the test results, we can visit the Tesults web site to see what our test results look like:

Tesults results overview

You can click on each test to see the individual results:

Tesults test result details

In the time I’ve used it, Tesults has proven to provide an easy-to-use way of uploading and publishing test results to all parties involved, thereby giving essential insight into test results and product quality. From discussions with them, I learned that the Tesults team is also considering the option to allow users to upload a file corresponding to a test case. This could be used, for example, to attach a screenshot of the browser state whenever a user interface-driven test fails, or to include a log file for more efficient failure analysis.

While on the subject of support and communication, the support I’ve received from the Tesults people has been excellent. I had some trouble getting Tesults to work, and they were very helpful when the problem was on my side and absolutely lightning fast with fixing the issues that surfaced in their product. I hope they can keep this up as the product and its user base grow!

Having said that, it should be noted that Tesults is still a product under development, so by the time you’re reading this post, features might have been added, other features might look different, and maybe some features will have been removed entirely. I won’t be updating this post for every new feature added, changed or removed. I suggest you take a look at the Tesults documentation for an overview of the latest features.

On a closing note, I mentioned earlier in this post that Tesults is a commercially licensed solution with a subscription-based revenue model. The Tesults team have told me that their main target market is teams, projects and (smaller) organizations of 10-15 up to around 100 people. For smaller teams that might not want to invest too heavily in a reporting solution, they are always open to discussing custom plans. In that case, you can contact them at enterprise@tesults.com. As I said, I’ve found the Tesults team to be extremely communicative, helpful and open to suggestions.

Disclaimer: I am in no way, shape or form associated with, nor do I have a commercial interest in Tesults as an organization or a product. I AM, however and again, of the opinion that good reporting is a critical but overlooked factor in the success of test automation. In my opinion, Tesults is an option well worth considering, given the ease of integration and the way test results are published and made available to stakeholders. I’m confident that this will only improve with time, as new features are added on a very regular basis.

Review: Automation Guild 2017

About half a year ago, in July of 2016 to be exact, I was invited by Joe from the well-known TestTalks podcast to contribute to a new initiative he had come up with: the Automation Guild conference. Joe was looking to organize an online conference fully dedicated to test automation, and he asked me if I wanted to host a session on testing RESTful APIs with REST Assured. Even though I’d never done anything like this before (or maybe because I’d never done anything like this before), I immediately said yes, only later realizing what it was, exactly, that I had agreed to do…

Since the conference was online and Joe was looking for the best possible experience for the Automation Guild delegates, he asked each of the speakers to record a video session in advance, including screen sharing and writing and executing code (where relevant, of course). This being an international conference also meant speaking in English, which made it all the more challenging for me personally. I’m fine with speaking English, but recording it, listening to it and editing all the ‘ermm..’s and ‘uuuhh’s out was something entirely new, and not exclusively pleasant either! It also took me a lot longer than expected, but in the end, I was fairly happy with the result. And I learned a lot in the process, from English pronunciation to video editing, so it was definitely not all bad!

Enough about that, back to the conference. It was held last week, January 9th to 13th, with around five sessions every day plus a couple of keynotes. The actual videos were released beforehand so all attendees could watch them whenever it best suited their schedule, while on the conference days there were live Q&A sessions with all of the speakers to create a live and interactive atmosphere. Having never participated in anything similar, and even though I caught only a couple of sessions due to other obligations (the time zone difference didn’t help either), I think this worked remarkably well.

My own Q&A session flew by too, with a lot of questions ranging from the fairly straightforward to the pretty complex. These questions did not just cover the contents of my session, but also API testing in general and there were some questions about service virtualization as well, which made it an even more interesting half hour.

I liked the interactive Q&A part of my talk, and of the conference as a whole, a lot, since getting good questions meant that the stuff I talked about hit home with the listeners. I’ve had conference talks before where the audience was suspiciously quiet afterwards, and that’s neither a good thing nor an agreeable experience, I can tell you. But in this case, there were a lot of questions, and we didn’t even get to all of them during the Q&A. If all goes well, I should receive the remaining ones later on and get to interact with a couple more listeners that way. But even so far, I had an amazing time talking to Joe and (indirectly) to the attendees and answering their questions as best I could.

As for the other speakers, Joe managed to create a world-class lineup of speakers, and I’m quite proud to have been a part of the speaker list. I never thought I’d be in a conference lineup together with John Sonmez, Alan Richardson, Seb Rose, Matt Wynne and so many other recognized names in the testing and test automation field. So far, I only managed to watch a couple of the other speakers’ sessions, but luckily, all of them are available for a year after the end of the conference, so I’ll definitely watch more of them when time permits in a couple of weeks.

I can only speak for myself, but I think the inaugural edition of Automation Guild was a big success, given such an incredible lineup and over 750 registered attendees. This is mostly due to the massive amount of effort Joe has put into setting it up; I can’t even begin to imagine how much time it must have cost him. Having said that, I am already looking forward to the second edition next year, if not as a second-time contributor, then surely as an attendee! If you missed the conference or couldn’t make it this time, mark your agenda for next year, because surely you don’t want to miss it again!

Why I don’t call myself a tester, or: defining what I do

Warning: rambling ahead!

Recently, I’ve been forced to think about roles in my profession, and what role it is exactly that I fulfill when at work. ‘Forced’ sounds way more negative than it is, by the way. In fact, I think it has been a good thing, because it has helped me to think and better define what it is I actually do. So, thanks to Richard:

and Maaret (she’s quoting this blog post, by the way):

for making me think about the restrictive activity of pinning roles and labels on people. Nobody conforms 100% to the definition of a given role, should such a definition ever be created and agreed upon by everybody (good luck with that!). On the other hand, being able to explain what I do and what value I can add to a person, a team or an organization is something I can’t really do without, especially as a freelance contractor. And that, sadly, involves roles and labels, because that’s mainly how the keyword-driven IT freelancing and contracting business works.

So far, I’ve mainly defined what I do by referring to what I’m not, something you can see in the tweets I referred to above:

  • I am not a developer, at least not compared to what most people think of when they think of a developer. For someone involved in test automation (which is a form of software development), I write a shockingly small amount of code. I can’t even remember exactly when I last wrote code that was used in actual automated test execution. Somewhere at the beginning of last year, I think…
  • I am not a tester. The last time I evaluated and experimented with an application in order to assess its quality and usefulness was years ago. I think it involved TMap test design techniques and rather large Excel sheets…
  • I don’t fit into an Agile team. This is the most recent realization I’ve had. A lot of projects that come my way ask me to be a member of an Agile team, contributing to testing, test automation and possibly some other tasks. That just doesn’t fit with what I like to do or how I like to work. For starters, I think it’s kind of hard to be a member of an Agile team when you’re only there two days a week; you just miss too much. But I like to work on multiple projects at the same time and also have some time left for training, business development and other fun stuff. Unfortunately, the freelance project market here largely consists of projects for 32-40 hours a week, with Agile being a very popular way of working.

On the other hand:

  • I work a lot with developers, helping them to realize WHY they need to spend time creating automated tests and WHAT approach might work best. I gladly leave the HOW to them, since they’ll run circles around me with their development skills. This requires me to stay sort of up to date with tools, approaches and practices in software development, even though I don’t write code (often).
  • I work a lot with testers, helping them to think properly about what test automation can do for them and how to ‘sell’ it to other stakeholders. This requires me to stay up to date with how testers think and work, even though I don’t test myself.
  • I work a lot with Agile teams, helping them to create and especially test software more efficiently through smart application of tools. This requires me to stay up to date with team dynamics, Agile practices and trends, even though I haven’t been contributing to daily standups and sprint planning sessions for a while.

But I still have a hard time defining exactly what it IS that I do! My LinkedIn tagline says this:

Test automation | Service virtualization | Consultant | Trainer | Writer | Speaker

Those probably come closest to what I do. Most importantly, it states the fields I am active in (test automation and service virtualization). What other people call consulting makes up most of my working time at the moment (I’d like that to change a little, but that’s a different story). But I don’t like the word ‘consultant’ or ‘consulting’. It makes me think too much of people with big mouths, expensive suits and six months of actual experience. And I don’t wear expensive suits. Or suits in general.

The rest of my time is spent training a little (hopefully more in the near future), writing a little (same goes for this) and speaking a little (and for this too). Yet, I don’t consider myself a trainer, a writer or a speaker. But maybe I am all of them. As well as a developer. And a tester. I don’t know.

Long story short, what I think it boils down to is that I help people, teams and organizations perform better in the areas of test automation and service virtualization. Areas that are of course not to be considered as goals of their own, but rather as servant to the larger scale goals of more effective testing, software development and doing business in general.

I think the key word here is ‘help’. I like to help. It isn’t about me. It shouldn’t be. If it looks like it is about me, please say so, because I’d like to avoid that by all means possible. As a start, next week I’ll talk test automation or service virtualization again.

Oh, and of course happy 2017 to all of you! I’m very much looking forward to helping people even more this year.