Service virtualization with Parasoft Virtualize Community Edition

A couple of years ago, I took my first steps in the world of service virtualization when I took on a project where I developed stubs (virtual assets) to speed up the testing efforts and to enable automated testing for a telecommunications service provider. The tool that was used there was Parasoft Virtualize. I had been working with the Parasoft tool set before that project, most notably with SOAtest, their functional testing solution, but never with Virtualize.

In that project, we were able to massively decrease the time needed to create test data and run tests against the system. The time required to set up a test case went down from weeks to minutes, literally, enabling the test team to test more, and more often, and to introduce end-to-end test automation as part of their testing efforts. That’s pretty much where my love for the service virtualization (SV) field began.

After that, I did a couple more projects using Virtualize, and started building more experience with SV, exploring other solutions including open source ones like SpectoLabs Hoverfly or WireMock. What I quickly learned, also from my conversations with the people at Parasoft, was that a lot of organizations see the benefits of (and even the need for) service virtualization. However, these organizations often get cold feet when they see the investment required to obtain a license for a commercial SV solution, which easily runs into the tens of thousands of dollars. Instead, they:

  • turn to open source solutions, a perfect choice when these offer all the features you need, or
  • start to build something themselves, which turns out to be a success much less often.

The people at Parasoft have clearly taken notice, because last week saw the official announcement of the free Community Edition of the Virtualize platform. Virtualize CE is their answer to the rise in popularity and possibilities that free or open source solutions offer. In this post, we’ll take a first look at what Virtualize CE has to offer. And that is quite a lot.

Features
Here are a couple of the most important features that Virtualize CE offers:

  • Support for HTTP and HTTPS
  • Support for literal, XML and JSON requests and responses, as well as SOAP-based and RESTful API simulation
  • Ability to create data driven responders (more on that later)
  • Recording traffic by acting as a proxy and replaying the recorded traffic for quick and easy virtual asset creation

Of course, there are also limitations when compared to the paid version of Virtualize, some of which are:

  • Traffic is limited to 11,000 hits per day
  • No support for protocols such as JMS and MQ, nor for other traffic types such as SQL queries or EDI messages
  • No ability to run Virtualize through the command line interface or configure virtual asset behaviour through the Virtualize API
  • Support is limited to the online resources

Still, Virtualize CE has a lot to offer. Let’s take a look.

Downloading and installation
You can download Virtualize CE from the Virtualize product page; all it takes is supplying an active email address to which the download link will be sent. It’s quite a big download at 1.1 GB, but there’s a reason for that: it includes the full version of both Virtualize and SOAtest (the Parasoft functional testing solution). This means that when you decide to upgrade to the full version, all you need is another license. No reinstalling of software required. After downloading and installing the software, you can simply unlock the CE license through the Preferences menu by entering the email address you supplied to get the download link. That’s it.

Creating a first virtual asset
As an example, I’m going to create a simulation of a simple API that returns data related to music. Virtualize either lets you create a virtual asset from scratch or generates a skeleton for it based on an API definition in, for example, WSDL, RAML or Swagger format. For this example, I’ve taken this RAML definition of a simple music management API. I won’t talk you through all the clicks and keystrokes it takes to create such an asset, but trust me, it is very straightforward and takes less than a minute. After the virtual asset skeleton has been created, you see the virtual asset definition with responders for all supported operations:

Our virtual asset with all of its responders

Even better, it is automatically deployed onto the Tomcat server running within Virtualize, meaning that our virtual asset can be accessed right away (after the Tomcat server is started, obviously). By default, the server is running on localhost at port 9080, which means that we can, for example, do a GET call to http://localhost:9080/Jukebox/songs (which returns a list of all songs known to the API) and see the following response:

Our first response from a Virtualize responder

This response is defined in the responder responsible for answering to the GET call to that endpoint, in the form of a predefined, canned response:

The definition of the above response
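As a sketch, a canned response body for the GET call to /songs could look something like the JSON below. Note that the field names and song data here are made up for illustration; the actual contents depend on the example data in the RAML definition you used:

```json
[
  {
    "songId": "550e8400-e29b-41d4-a716-446655440000",
    "songTitle": "A made-up song"
  },
  {
    "songId": "550e8400-e29b-41d4-a716-446655440001",
    "songTitle": "Another made-up song"
  }
]
```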

To alter the response, you can simply change the text, save your changes and the updated virtual asset will automatically be deployed to the Tomcat server.

Making the virtual asset data driven
That’s all well and good, but it gets better: we can make our virtual asset a lot more powerful by creating the responses it returns in a data driven manner. Basically, this means that response messages can be populated with contents from a data source, such as a CSV or an Excel file or even a database table (accessed via JDBC). As an example, I’ve created an internal Table data source containing songs from one of my favourite albums of the last couple of years:

Data source definition
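To make this concrete, the same data source could be sketched as a CSV file. The songId of the track Strong is the one used in the example request later in this post; the column names and all other values are made up for illustration:

```csv
songId,songTitle,artist,album
d155b058-f51f-11e6-bc64-92361f002676,Strong,Some artist,Some album
```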

Based on the song ID, I’d like the virtual asset to look up the corresponding record in the data source and then populate the response with the other values in that same record. This is a mechanism referred to as data source correlation in Virtualize. It is defined on the responder level, so let’s apply it to the GET responder for the http://localhost:9080/Jukebox/songs/{songId} endpoint:

Configuring data source correlation

What we define here is that whenever a GET request comes in that matches the http://localhost:9080/Jukebox/songs/{songId} endpoint, Virtualize should look up the value of the path parameter with index 2, which is {songId}, in the songId column of the Songs data source. If there’s a match, the response can be populated with other values in the same data source row. The mapping between response message fields and data source columns can be easily defined in the response configuration window:

Populating the responder message with data source values

But does it work? Let’s find out by calling http://localhost:9080/Jukebox/songs/d155b058-f51f-11e6-bc64-92361f002676 to see if the data corresponding to the track Strong (not coincidentally my favourite track of the album) is returned.

Response returned by the data driven responder

And it is! This means our data driven responder works as intended. In this way, you can easily create flexible and powerful virtual assets.

One last feature I’d like to highlight is the ability to track requests and responses. With a single click of the mouse, you can turn on Event Monitoring for individual virtual assets and see what messages are received and sent by it in the Event Details perspective, for example for logging or virtual asset debugging purposes (remember that your virtual assets need to be tested too!):

The Virtualize Event Viewer shows requests and responses

Apart from the features presented here, there’s much, much more you can do with Parasoft Virtualize CE. If you find yourself looking for a service virtualization solution that’s easy to set up and use, this is one you should definitely check out. I’d love to hear your thoughts and experiences! And as always, I’ll happily answer any questions you might have.

Creating stubs using the Hoverfly Java DSL

One of the things I like best about the option to write stubs for service virtualization in code is that by doing so, you’re able to store them in and access them through the same version control system (Git, for example) as your production code and your automated tests. I was excited when I read a blog post on the SpectoLabs website announcing that they had added a Java DSL to their most recent Hoverfly release. I’ve been keeping up with Hoverfly as a product for a while now, and it’s rapidly becoming an important player in the world of open source service virtualization solutions.

This Java DSL is somewhat similar to what WireMock offers, in that it allows you to quickly create stubs in your code, right when and where you need them. This blog post will not be a comparison between Hoverfly and WireMock, though. Both tools have some very useful features and have earned (and are still earning) their respective place in the service virtualization space, so it’s up to you to see which of these best fits your project.

Instead, back to Hoverfly. Let’s take a look at a very basic stub definition first:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.afirststub.com")
        .get("/test")
        .willReturn(success("Success", "text/plain"))
));

The syntax used to create a stub is pretty straightforward, as you can see. Here, we have defined a stub that listens at http://www.afirststub.com/test and returns a positive response, defined using the success() method, which boils down to Hoverfly returning an HTTP response with a 200 status code. The response further contains the string Success as its body and text/plain as its content type. By replacing these values with other content and content type values, you can easily create a stub that exhibits the behaviour required for your specific testing needs.

As you can see, a Hoverfly stub is defined using the JUnit @ClassRule annotation. For those of you that use TestNG, you can manage the Hoverfly instance (the Hoverfly class is included in the hoverfly-java dependency) in @Before and @After methods instead.

We can check that this stub works as intended by writing and running a simple REST Assured test for it:

@Test
public void testAFirstStub() {
		
	given().
	when().
		get("http://www.afirststub.com/test").
	then().
		assertThat().
		statusCode(200).
	and().
		body(equalTo("Success"));
}

Since Hoverfly works as a proxy, it can return any data you specify, even for existing endpoints. This means that you don’t need to change existing configuration files and endpoints in your system under test when you’re running your tests, no matter whether you’re using an actual endpoint or the Hoverfly stub representation of it. A big advantage, if you ask me.

Consider the following (utterly useless) use case: the endpoint http://ergast.com/api/f1/drivers/max_verstappen.json returns data for the Formula 1 driver Max Verstappen in JSON format (you can click the link to see what data is returned). Assume we want to test what happens when the permanentNumber changes value from 33 to, say, 999. We can simply create a stub that listens at the same endpoint, but returns different data:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("ergast.com")
        .get("/api/f1/drivers/max_verstappen.json")
        .willReturn(success("{\"permanentNumber\": \"999\"}", "application/json"))
));

Note that I removed all other data that is returned by the original endpoint for brevity and laziness. Mostly laziness, actually. Again, a simple test shows that instead of the data returned by the real endpoint, we now get our data from the Hoverfly stub:

@Test
public void testStubFakeVerstappen() {
		
	given().
	when().
		get("http://ergast.com/api/f1/drivers/max_verstappen.json").
	then().
		assertThat().
		body("permanentNumber",equalTo("999"));
}

Apart from being quite useless, the example above also introduces an issue with defining stubs that return larger amounts of JSON data (or XML data, for that matter): since JSON is not really well supported out of the box in Java (nor is XML), we could potentially end up with a large and unwieldy string with lots of character escaping for larger response bodies. Luckily, Hoverfly offers a solution for that in the form of object (de-)serialization.

Assume we have a simple Car POJO with two fields: make and model. If we create an instance of that Car object like this:

private static Car myCar = new Car("Ford", "Focus");

and we pass this to the stub definition as follows:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.testwithcarobject.com")
        .get("/getmycar")
        .willReturn(success(json(myCar)))
));

then Hoverfly will automatically serialize the Car object instance to JSON, which we can visualize by creating another REST Assured test and having it log the response body to the console:

@Test
public void testStubGetCarObject() {
		
	given().
	when().
		get("http://www.testwithcarobject.com/getmycar").
	then().
		log().
		body().
	and().
		assertThat().
		body("make",equalTo("Ford"));
}

When run, this test generates the following console output, indicating that Hoverfly successfully serialized our Car instance to JSON:

{
    "make": "Ford",
    "model": "Focus"
}

Note that the getters of the POJO need to be named correctly for this to work. For example, the getter for the make field needs to be called getMake(), or else the object will not be serialized.
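As a sketch, a Car POJO that serializes correctly could look like this (the class and field names follow the example above; the JavaBean-style getter names are the important part):

```java
public class Car {

    private final String make;
    private final String model;

    public Car(String make, String model) {
        this.make = make;
        this.model = model;
    }

    // Getter names must follow the JavaBean convention (getMake, getModel),
    // or the fields will not be picked up during serialization.
    public String getMake() {
        return make;
    }

    public String getModel() {
        return model;
    }
}
```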

The final Hoverfly feature that I’d like to demonstrate is the ability to simulate error flows by returning bad requests. This can be done simply as follows:

@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
    service("www.badrequest.com")
        .get("/req")
        .willReturn(badRequest())
));

and can be verified by checking the status code corresponding with a bad request, which is HTTP 400, with a test:

@Test
public void testStubBadRequest() {
		
	given().
	when().
		get("http://www.badrequest.com/req").
	then().
		assertThat().
		statusCode(400);
}

Similar to the Hoverfly product in general, its Java DSL is still under active development. This post was written based on version 0.3.6 and may not reflect newer versions. I had a bit of trouble getting the code to run, initially, but the SpectoLabs team have been very responsive and helpful in resolving the questions I had and the issues I encountered.

As an end note, please be aware that the Java DSL we’ve seen in this post is just one way of using Hoverfly. For a complete overview of the features and possibilities provided by the tool, please take a look at the online documentation.

A Maven project featuring all of the examples and tests in this post can be downloaded here. Tip: you’ll need to set your Java compiler compliance level to 1.8 in order for the code to compile and run correctly.
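If you’re using Maven, one way to set the compliance level is through the compiler properties in your pom.xml:

```xml
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```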

On elegance

For the last couple of weeks, I’ve been reading up on the works of one of the most famous Dutch computer scientists of all time: Edsger W. Dijkstra. Being from the same country and sharing a last name (we’re not related as far as I know, though), of course I’ve heard of him and the importance of his work to the computer science field, but it wasn’t until I saw a short documentary on his life (it is in Dutch, so probably of little use to most of you) that I got interested in his work. Considering the subject of this blog, his views on software testing are of course what interest me most, but there is more.

Most of you have probably heard what is perhaps his best known quote related to software testing:

Program testing can be used to show the presence of bugs, but never to show their absence.

However, that’s not what I wanted to write about in this blog post. Dijkstra has left his legacy by means of a large number of manuscripts, collectively known as the EWDs (named for the fact that he signed them with his initials). For those of you that have a little spare time left, you can read all transcribed EWDs here.

I was particularly struck by a quote from Dijkstra that featured in the documentary, and which is included in EWD 1284:

After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure.

While Dijkstra was referring to solutions to computer science-related problems in general, and to software development and programming structures in particular, it struck me that this quote applies quite well to test automation too.

Here’s the definition of ‘elegance’ from the Oxford Dictionary:

  1. The quality of being graceful and stylish in appearance or manner.
  2. The quality of being pleasingly ingenious and simple.

To me, this is what defines a ‘good’ solution to any test automation problem: one that is stylish, ingenious and simple. Too often, I’m seeing solutions proposed (or answers to questions given) that are overly complicated, unstructured, not well thought through or just plain horrible. And, granted, I’ve been guilty of doing the same myself, and I’m probably still doing so from time to time. But as of now, I know at least one characteristic a good solution should have: elegance.

Dijkstra also gives two typical characteristics of elegant solutions:

An elegant solution is shorter than most of its alternatives.

This is why I like tools such as REST Assured and WireMock so much: the code required to define tests and mocks, respectively, in these tools is short, readable and powerful. Elegance at work.

An elegant solution is constructed out of logically divided components, each of which can be easily swapped out for an alternative without influencing the rest of the solution too much.

This is a standard that every test automation solution should be held up to: if any of the components fails, is no longer maintained or is being outperformed by an alternative, how much effort does it require to swap out that component?
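As a toy illustration of this principle in Java (the names here are made up for the example), hiding a component behind an interface is one way to keep it swappable:

```java
// A report publisher abstracted behind an interface: the rest of the
// test automation solution depends only on this contract.
interface ReportPublisher {
    String publish(String testResults);
}

// One implementation...
class ConsoleReportPublisher implements ReportPublisher {
    public String publish(String testResults) {
        return "console: " + testResults;
    }
}

// ...which can be swapped for another without touching the calling code.
class HtmlReportPublisher implements ReportPublisher {
    public String publish(String testResults) {
        return "<html>" + testResults + "</html>";
    }
}
```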

As a last nugget of wisdom from Dijkstra, here’s another of his quotes on elegance:

Elegant solutions are often the most efficient.

And isn’t efficiency something we all should strive for when creating automated tests? It is definitely something I’ll try and pay more attention to in the future. Along with effectiveness, that is, something that’s maybe even more important.

However, according to my namesake, I am (and probably most of you are) failing to meet his standards in all of the projects that we’re working on. As he states:

For those who have wondered: I don’t think object-oriented programming is a structuring paradigm that meets my standards of elegance.

Ah well…