First steps as a test automation coach

“We want our development teams to take the next step towards adopting Continuous Delivery by giving their test automation efforts a boost.”

That was the task I was given a couple of months ago when I started a new project, this one for a well-known media company here in the Netherlands. Previously, I’ve mainly been involved in more hands-on test automation implementation projects, meaning I was usually the one designing and implementing the test automation solution (either alone or as part of a team). For this project, however, my position would be much different:

  • There were multiple development teams to be supported (the exact number changed a couple of times during the assignment, but there were at least four at any time), meaning there was no way I was able to spend enough time on the implementation of automated tests for any of those teams.
  • This was a part-time assignment: due to other commitments, I had only two days per week available, which made it even less feasible to get involved in any serious test automation implementation myself.
  • Each development team was responsible for its own line of products and could make its own decisions on the technology stack to be used (within a certain bandwidth). Many of those stacks I had never worked with, or even heard of, before (GraphQL, for example), making it even less feasible for me to contribute to any actual tests.

Instead, at the start of the project, we decided that I would act more as a test automation coach of sorts, leaving the creation of the test automation to the development teams (which made perfect sense for this client). Something I’d never done before, so the fact that I was given the chance to do so was a pleasant surprise. Normally, as a contractor, I’m only hired to do stuff I’m already experienced in, but I guess that through my resume and the interview I built enough trust for them to hire me for the job anyway. I’m very grateful for that.

So, what did I do?

Kickoff
As the development teams consisted of about 40 developers in total, with a wide range of levels of experience, background and preferences in technology and programming languages, we (the hiring manager, the client’s team of testers and myself) thought it would be a good idea to get them at least somewhat on the same level with regards to the concept of test automation. We did this by organizing a number of test automation awareness sessions, in which I presented my view on test automation. The focus of this presentation was mostly on the ‘why?’ and the ‘what?’ of it, because I quickly figured out that the developers themselves were perfectly capable of figuring out the ‘how?’ (an impression I get from a lot of my clients nowadays, by the way).

Taking inventory of test automation maturity
Next up was a series of interviews with all tech leads from the development teams, to see where they stood with regards to test automation: what they already did, what was lacking and what would be a good ‘next step’ to allow that team to make the transition towards Continuous Delivery. This information was shared across teams to promote knowledge sharing. You’d be surprised how often teams are struggling with something that’s already been solved by another team just a couple of yards away, without either party knowing of the situation of the other.

Test automation hackathons
The most important and most impactful part of my assignment was organizing a two-day ‘hackathon’ (for lack of a better word) for each of the teams (one team at a time). The purpose of this hackathon was to take the team away from the daily grind of developing and delivering production code and have them work on their technical debt with regards to test automation. The rules of the game:

  • Organize the hackathon in a space separate from the work floor, both to give the team the feeling that they were removed from the usual work routine as well as prevent outside interference as much as possible.
  • Organize the hackathon as a Scrum sprint of sorts, with a kickoff/planning session, show and tell/standup twice a day and a demo session and retrospective at the end.
  • Deliver working software, meaning that I’d rather have one test that works and is fully integrated into the build and deployment process than fifty tests that do not run automatically. The most difficult hurdles are never in creating more tests, once you’ve got the groundwork taken care of.
  • Focus on a capability that the team wants to have, but does not currently have. For some teams, this was unit testing for a specific type of software, for others it was end-to-end testing, or build pipelines, and in one case production monitoring. The subject didn’t matter, as long as it had to do with software quality and it was something the team did not already do.

Results
The hackathons worked out really well. Monitoring the teams after they had completed their ‘two days of test automation’, I could see they had indeed taken a step in the right direction: they were more active on the test automation front and had a better awareness of what they were working towards. Mission accomplished!

As my assignment ended after that, I can’t say anything about the long term effects, unfortunately, but I’m convinced that the testers themselves can take over the role of test (automation) coach perfectly well. I will stay in touch with the client regularly to see how they’re doing, of course.

What did I learn?
As I said, this was my first time acting more as a coach than as an engineer, so naturally I learned a lot of things myself:

  • Hackathons are a great way of improving test automation efforts for development teams. Pulling teams away from their daily grind and having them focus on improving their automation efforts is both useful and fun. I was lucky that management support was not an issue, so your mileage may vary, but my point stands.
  • I (think I) have what it takes to be a test automation coach. This was the biggest breakthrough for me personally. As a pretty introverted person who likes to play around with tools regularly, it was hard initially to step away from the keyboard and fight the urge to create tests myself, helping other people to become better at it instead. It IS the way forward for my career, though, I think, because I’ve yet again seen that there’s no one better at creating automated tests than a (good) developer. What I can bring to the table is experience and guidance as to the ‘why?’ and ‘what?’.
  • Part-time projects are great in terms of flexibility, especially when you find yourself in a coaching role. You can organize a hackathon, give teams guidance and suggestions on what to work on, then come back a couple of days later to see how they’re doing, evaluate, discuss and let them take the next step.

In short, my first adventure as a test automation coach has been a great experience. I’m looking forward to the next one!

Service virtualization with Parasoft Virtualize Community Edition

A couple of years ago, I took my first steps in the world of service virtualization when I took on a project where I developed stubs (virtual assets) to speed up the testing efforts and to enable automated testing for a telecommunications service provider. The tool that was used there was Parasoft Virtualize. I had been working with the Parasoft tool set before that project, most notably with SOAtest, their functional testing solution, but never with Virtualize.

In that project, we were able to massively decrease the time needed to create test data and run tests against the system. The time required to set up a test case went down from weeks to minutes, literally, enabling the test team to test more and more often and to introduce end-to-end test automation as part of their testing efforts. That’s where my love for the service virtualization (SV) field began, pretty much.

After that, I did a couple more projects using Virtualize, and started building more experience with SV, exploring other solutions including open source ones like SpectoLabs Hoverfly and WireMock. What I quickly learned, also from my conversations with the people at Parasoft, was that a lot of organizations see the benefits of (and even the need for) service virtualization. However, these organizations often get cold feet when they see the investment required to obtain a license for a commercial SV solution, which easily runs into the tens of thousands of dollars. Instead, they:

  • turn to open source solutions, a perfect choice when these offer all the features you need, or
  • start to build something themselves, which turns out to be a success much less often.

The people at Parasoft have clearly taken notice, because last week saw the official announcement of the free Community Edition of the Virtualize platform. Virtualize CE is their answer to the rise in popularity and possibilities that free or open source solutions offer. In this post, we’ll take a first look at what Virtualize CE has to offer. And that is quite a lot.

Features
Here are a couple of the most important features that Virtualize CE offers:

  • Support for HTTP and HTTPS
  • Support for literal, XML and JSON requests and responses and SOAP-based and RESTful API simulation
  • Ability to create data driven responders (more on that later)
  • Recording traffic by acting as a proxy and replaying the recorded traffic for quick and easy virtual asset creation

Of course, there are also limitations when compared to the paid version of Virtualize, some of which are:

  • Traffic is limited to 11,000 hits per day
  • No support for protocols such as JMS and MQ, nor for other traffic types such as SQL queries or EDI messages
  • No ability to run Virtualize through the command line interface or configure virtual asset behaviour through the Virtualize API
  • Support is limited to the online resources

Still, Virtualize CE has a lot to offer. Let’s take a look.

Downloading and installation
You can download Virtualize CE from the Virtualize product page; all it takes is supplying an active email address to which the download link will be sent. It’s quite a big download at 1.1 GB, but there’s a reason for that: it includes the full version of both Virtualize and SOAtest (the Parasoft functional testing solution). This means that when you decide to upgrade to the full version, all you need is another license. No reinstalling of software required. After downloading and installing the software, you can simply unlock the CE license through the Preferences menu by entering the email address you supplied to get the download link. That’s it.

Creating a first virtual asset
As an example, I’m going to create a simulation of a simple API that returns data related to music. Virtualize either lets you create a virtual asset from scratch or generates a skeleton for it based on an API definition in, for example, WSDL, RAML or Swagger format. For this example, I’ve taken this RAML definition of a simple music management API. I won’t talk you through all the clicks and keystrokes it takes to create such an asset, but trust me, it is very straightforward and takes less than a minute. After the virtual asset skeleton has been created, you see the virtual asset definition with responders for all supported operations:

Our virtual asset with all of its responders

Even better, it is automatically deployed onto the Tomcat server running within Virtualize, meaning that our virtual asset can be accessed right away (after the Tomcat server is started, obviously). By default, the server is running on localhost at port 9080, which means that we can for example do a GET call to http://localhost:9080/Jukebox/songs (which returns a list of all songs known to the API) and see the following response:

Our first response from a Virtualize responder

This response is defined in the responder responsible for answering to the GET call to that endpoint, in the form of a predefined, canned response:

The definition of the above response

To alter the response, you can simply change the text, save your changes and the updated virtual asset will automatically be deployed to the Tomcat server.

Making the virtual asset data driven
That’s all well and good, but it gets better: we can make our virtual asset a lot more powerful by creating the responses it returns in a data driven manner. Basically, this means that response messages can be populated with contents from a data source, such as a CSV or an Excel file or even a database table (accessed via JDBC). As an example, I’ve created an internal Table data source containing songs from one of my favourite albums of the last couple of years:

Data source definition

Based on the song ID, I’d like the virtual asset to look up the corresponding record in the data source and then populate the response with the other values in that same record. This is a mechanism referred to as data source correlation in Virtualize. It is defined on the responder level, so let’s apply it to the GET responder for the http://localhost:9080/Jukebox/songs/{songId} endpoint:

Configuring data source correlation

What we define here is that whenever a GET request comes in that matches the http://localhost:9080/Jukebox/songs/{songId} endpoint, Virtualize should look up the value of the path parameter with index 2, which is {songId}, in the songId column of the Songs data source. If there’s a match, the response can be populated with other values in the same data source row. The mapping between response message fields and data source columns can be easily defined in the response configuration window:

Populating the responder message with data source values

But does it work? Let’s find out by calling http://localhost:9080/Jukebox/songs/d155b058-f51f-11e6-bc64-92361f002676 and see if the data corresponding to the track Strong (not coincidentally my favourite track of the album) is returned.

Response returned by the data driven responder

And it is! This means our data driven responder works as intended. In this way, you can easily create flexible and powerful virtual assets.
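Stripped of the tooling, data source correlation boils down to a keyed lookup: find the row whose songId column matches the incoming path parameter, then fill the response from the other values in that row. Here’s a minimal sketch of that idea in plain Java; the class name, column names and the album ID value are my own illustrations, not anything Virtualize generates:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of data source correlation: look up the record
// matching the songId path parameter, then populate the response from
// the other columns of that record. The data below mirrors the example
// in the post (the albumId value is made up for illustration).
public class SongCorrelation {

    // Each row maps column names to values, keyed by songId.
    private static final Map<String, Map<String, String>> SONGS = new HashMap<>();

    static {
        Map<String, String> row = new HashMap<>();
        row.put("songTitle", "Strong");
        SONGS.put("d155b058-f51f-11e6-bc64-92361f002676", row);
    }

    // Returns a JSON response built from the matching row, or null when
    // there is no match (the real tool would fall back to a default or
    // error responder in that case).
    public static String respond(String songId) {
        Map<String, String> row = SONGS.get(songId);
        if (row == null) {
            return null;
        }
        return String.format(
            "{\"songId\": \"%s\", \"songTitle\": \"%s\"}",
            songId, row.get("songTitle"));
    }
}
```

The responder configuration screens shown above are essentially a point-and-click way of defining this lookup and the field-to-column mapping.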

One last feature I’d like to highlight is the ability to track requests and responses. With a single click of the mouse, you can turn on Event Monitoring for individual virtual assets and see what messages are received and sent by it in the Event Details perspective, for example for logging or virtual asset debugging purposes (remember that your virtual assets need to be tested too!):

The Virtualize Event Viewer shows requests and responses

Apart from the features presented here, there’s much, much more you can do with Parasoft Virtualize CE. If you find yourself looking for a service virtualization solution that’s easy to set up and use, this is one you should definitely check out. I’d love to hear your thoughts and experiences! And as always, I’ll happily answer any questions you might have.

Creating stubs using the Hoverfly Java DSL

One of the things I like best about the option to write stubs for service virtualization in code is that by doing so, you’re able to store them in and access them through the same version control system (Git, for example) as your production code and your automated tests. I was excited when I read a blog post on the SpectoLabs website announcing that they had added a Java DSL to their most recent Hoverfly release. I’ve been keeping up with Hoverfly as a product for a while now, and it’s rapidly becoming an important player in the world of open source service virtualization solutions.

This Java DSL is somewhat similar to what WireMock offers, in that it allows you to quickly create stubs in your code, right when and where you need them. This blog post will not be a comparison between Hoverfly and WireMock, though. Both tools have some very useful features and have earned (and are still earning) their respective places in the service virtualization space, so it’s up to you to see which of them best fits your project.

Instead, back to Hoverfly. Let’s take a look at a very basic stub definition first:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.afirststub.com")
        .get("/test")
        .willReturn(success("Success", "text/plain"))
));

The syntax used to create a stub is pretty straightforward, as you can see. Here, we have defined a stub that listens at http://www.afirststub.com/test and returns a positive response, defined using the success() method, which boils down to Hoverfly returning an HTTP response with a 200 status code, the string Success as its body and text/plain as its content type. By replacing these values with other content and content type values, you can easily create a stub that exhibits the behaviour required for your specific testing needs.

As you can see, a Hoverfly stub is defined using the JUnit @ClassRule annotation. For those of you who use TestNG, you can manage the Hoverfly instance (the Hoverfly class is included in the hoverfly-java dependency) in @BeforeClass and @AfterClass methods instead.

We can check that this stub works as intended by writing and running a simple REST Assured test for it:

@Test
public void testAFirstStub() {
		
	given().
	when().
		get("http://www.afirststub.com/test").
	then().
		assertThat().
		statusCode(200).
	and().
		body(equalTo("Success"));
}

Since Hoverfly works as a proxy, it can return any data you specify, even for existing endpoints. This means that you don’t need to change existing configuration files and endpoints in your system under test when you’re running your tests, no matter whether you’re using an actual endpoint or the Hoverfly stub representation of it. A big advantage, if you ask me.
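As far as I understand it, the mechanism behind this is that the HoverflyRule points the JVM’s standard proxy settings at the local Hoverfly process, so matching outbound requests are intercepted before they ever reach the real host. A rough sketch of that idea, assuming Hoverfly’s documented default proxy port of 8500 (the rule does this for you automatically; you wouldn’t write this yourself):

```java
// Sketch of what happens behind the scenes: route the JVM's HTTP(S)
// traffic through the local Hoverfly proxy so that stubbed hosts are
// intercepted transparently. Port 8500 is assumed here as Hoverfly's
// default proxy port.
public class ProxySketch {

    public static void pointJvmAtHoverfly(int proxyPort) {
        System.setProperty("http.proxyHost", "localhost");
        System.setProperty("http.proxyPort", String.valueOf(proxyPort));
        System.setProperty("https.proxyHost", "localhost");
        System.setProperty("https.proxyPort", String.valueOf(proxyPort));
    }
}
```

Because the interception happens at the proxy level, the system under test keeps calling the exact same URLs, stubbed or not.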

Consider the following (utterly useless) use case: the endpoint http://ergast.com/api/f1/drivers/max_verstappen.json returns data for the Formula 1 driver Max Verstappen in JSON format (you can click the link to see what data is returned). Suppose we want to test what happens when the permanentNumber changes value from 33 to, say, 999. We can simply create a stub that listens at the same endpoint, but returns different data:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("ergast.com")
        .get("/api/f1/drivers/max_verstappen.json")
        .willReturn(success("{\"permanentNumber\": \"999\"}", "application/json"))
));

Note that I removed all other data that is returned by the original endpoint for brevity and laziness. Mostly laziness, actually. Again, a simple test shows that instead of the data returned by the real endpoint, we now get our data from the Hoverfly stub:

@Test
public void testStubFakeVerstappen() {
		
	given().
	when().
		get("http://ergast.com/api/f1/drivers/max_verstappen.json").
	then().
		assertThat().
		body("permanentNumber",equalTo("999"));
}

Apart from being quite useless, the example above also exposes an issue with defining stubs that return larger amounts of JSON data (or XML data, for that matter): since Java has no convenient native syntax for JSON (or XML) literals, larger response bodies quickly turn into large, unwieldy strings full of escaped characters. Luckily, Hoverfly offers a solution for that in the form of object (de-)serialization.

Assume we have a simple Car POJO with two fields: make and model. If we create an instance of that Car object like this:

private static Car myCar = new Car("Ford", "Focus");

and we pass this to the stub definition as follows:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.testwithcarobject.com")
        .get("/getmycar")
        .willReturn(success(json(myCar)))
));

then Hoverfly will automatically serialize the Car object instance to JSON, which we can visualize by creating another REST Assured test and having it log the response body to the console:

@Test
public void testStubGetCarObject() {
		
	given().
	when().
		get("http://www.testwithcarobject.com/getmycar").
	then().
		log().
		body().
	and().
		assertThat().
		body("make",equalTo("Ford"));
}

When run, this test generates the following console output, indicating that Hoverfly successfully serialized our Car instance to JSON:

{
    "make": "Ford",
    "model": "Focus"
}

Note that the getters of the POJO need to be named correctly for this to work. For example, the getter for the make field needs to be called getMake(), or else the object will not be serialized.
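The Car class itself never appears in this post, so here’s a minimal version that satisfies that naming convention. This is my own sketch of what such a POJO could look like, not code taken from the Hoverfly project:

```java
// Minimal POJO sketch following the JavaBean getter convention that
// Hoverfly's JSON serialization relies on: the 'make' field pairs
// with getMake(), 'model' with getModel().
public class Car {

    private final String make;
    private final String model;

    public Car(String make, String model) {
        this.make = make;
        this.model = model;
    }

    public String getMake() {
        return make;
    }

    public String getModel() {
        return model;
    }
}
```

Rename getMake() to, say, fetchMake() and the make field would silently disappear from the serialized JSON.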

The final Hoverfly feature that I’d like to demonstrate is the ability to simulate error flows by returning bad requests. This can be done simply as follows:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.badrequest.com")
        .get("/req")
        .willReturn(badRequest())
));

and can be verified by checking the status code corresponding with a bad request, which is HTTP 400, with a test:

@Test
public void testStubBadRequest() {
		
	given().
	when().
		get("http://www.badrequest.com/req").
	then().
		assertThat().
		statusCode(400);
}

Like the Hoverfly product in general, its Java DSL is still under construction. This post was written based on version 0.3.6 and may not reflect newer versions. I had a bit of trouble getting the code to run initially, but the SpectoLabs team have been very responsive and helpful in resolving the questions I had and the issues I encountered.

As an end note, please be aware that the Java DSL we’ve seen in this post is just one way of using Hoverfly. For a complete overview of the features and possibilities provided by the tool, please take a look at the online documentation.

A Maven project featuring all of the examples and tests in this post can be downloaded here. Tip: you’ll need to set your Java compiler compliance level to 1.8 in order for the code to compile and run correctly.