Why service virtualization is like a wind tunnel

This week I attended the second edition of the Continuous Delivery Conference here in the Netherlands. It was a very interesting day with some good keynotes and presentations, plus I had a lot of good discussions with both old and new acquaintances, something that really adds value to a conference for me as an independent consultant. But that’s not what this post is about. Rather, it’s about something that struck me when listening to one of the presentations. I’m not sure who was talking as I made notes only much later, but if it was you, please reveal yourself and come get your credits! The presenter briefly discussed service virtualization as a tool to use in the continuous delivery pipeline and compared it to using a wind tunnel for investigating the aerodynamic properties of cars or airplanes. This struck me as a very solid analogy, so much so that I would like to share it with you here.

Testing in a wind tunnel

So why do I think service virtualization IS like wind tunnel testing, exactly?

It allows for executing tests in a controlled environment
Testing car or airplane aerodynamics only yields valuable results if you know exactly what conditions applied and which input parameters (wind speed, angle, variation, etc.) were used to obtain the results. Similarly, when you’re testing any distributed system with external dependencies, you can only safely rely on the test results when you know exactly how those dependencies are behaving. With modern-day highly distributed applications – this especially applies to applications built using microservice architectures – not all dependencies are necessarily under your control anymore. If you want to have full control over the behaviour of those dependencies for stable and reliable testing, service virtualization is an excellent option.

Tests can be repeated under the exact same circumstances
Wind tunnels enable test teams to rerun specific tests over and over again, using the same conditions every time. This allows them to exactly determine the effect of any change on the aerodynamics of their car or plane under test. In software testing, this is exactly what you want when you need to reproduce or analyze a defect or any other suspect behaviour. Unfortunately, when dependencies are outside your circle of control, this might not be easy, if possible at all. When you’re using virtual assets instead of external dependencies, it’s far easier to recreate the exact conditions that applied when the defect occurred.

It can be used to test for highly improbable situations
150mph wind gusts, wind coming from three directions at the same time, … Situations that might be really hard – if not impossible – to reproduce when road testing, but made possible by using a wind tunnel. It’s those corner cases where interesting behaviour of your test object might surface, so they are really worth looking at. With service virtualization, it’s possible, for example, to prepare highly improbable responses for a virtualized third party dependency and see how your application handles these. This is a great way to improve the resilience of and trust in the application you’re developing and testing.
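To make this a little more concrete: a purely illustrative, hand-rolled stub (plain Node.js here, no particular service virtualization tool assumed) could be rigged to always return an improbable response, so you can observe how your application copes with it:

var http = require('http');

// stand-in for a third party dependency, rigged to return an improbable response:
// an HTTP 200 that claims to contain JSON but has an empty body
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end('');
}).listen(8090);

// point the application under test at http://localhost:8090 instead of the real dependency

Dedicated service virtualization tools let you define this kind of behaviour without hand-rolling a stub, but the principle is the same.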

There’s always a need for real life road testing
As with testing cars and planes, you can go a long way using simulated test environments, but there’s no place like the road to really see how your software holds up. So never rely on virtualization alone when testing an application that uses dependencies, because there’s always a situation or two you didn’t think of when virtualizing. Instead, use your software wind tunnel wisely and your testing process will reap the benefits.

Creating mock RESTful APIs using Sandbox

While browsing through my Twitter feed a couple of days ago I saw someone mentioning Sandbox, a Software as a Service (SaaS) solution for quick creation and deployment of mock services for development and testing purposes. After starting to play around with it a bit, I was rather impressed by the ease with which one can create useful mocks from Swagger, RAML and WSDL API specifications.

As an example, I created a RAML API model for a simple API that shows information about test tools. Consumers of this API can create, read, update and delete entries in the list. The API contains six different operations:

  • Add a test tool to the list
  • Delete a test tool from the list
  • Get information about a specific test tool from the list
  • Update information for a specific test tool
  • Retrieve the complete list of test tools
  • Delete the complete list of test tools

You can view the complete RAML specification for this API here.

Creating a skeleton for the mock API in Sandbox is as easy as registering and then loading the RAML specification into Sandbox:
[Screenshot: Loading the RAML specification in Sandbox]

Sandbox then generates a fully functioning API skeleton based on the RAML:

[Screenshot: API operations from the RAML]

Sandbox also creates a routing routine for every operation:

var testtools = require("./routes/testtools.js")

/* Route definition styles:
 *
 *	define(path, method, function)
 *	soap(path, soapAction, function)
 *
 */
Sandbox.define("/testtools", "POST", testtools.postTesttools);
Sandbox.define("/testtools", "GET", testtools.getTesttools);
Sandbox.define("/testtools", "DELETE", testtools.deleteTesttools);
Sandbox.define("/testtools/{toolname}", "GET", testtools.getTesttools2);
Sandbox.define("/testtools/{toolname}", "PUT", testtools.putTesttools);
Sandbox.define("/testtools/{toolname}", "DELETE", testtools.deleteTesttools2);

and empty responses for every operation (these are the responses for the first two operations):

/*
 * POST /testtools
 *
 * Parameters (body params accessible on req.body for JSON, req.xmlDoc for XML):
 *
 */
exports.postTesttools = function(req, res) {
	res.status(200);

	// set response body and send
	res.send('');
};

/*
 * GET /testtools
 *
 * Parameters (named path params accessible on req.params and query params on req.query):
 *
 * q(type: string) - query parameter - Search phrase to look for test tools
 */
exports.getTesttools = function(req, res) {
	res.status(200);

	// set response body and send
	res.type('json');
	res.render('testtools_getTesttools');
};

Sandbox also creates template responses (these are rendered using res.render('testtools_getTesttools') in the example above). Templates are mostly useful when dealing with very large JSON responses or with XML responses, though, and as our example API has neither, we won’t use them here.

To show that the generated API skeleton is fully working, we can simply send a GET to the mock URL and verify that we get a response:
[Screenshot: Testing the generated API skeleton]

Now that we’ve seen our API in action, it’s time to implement the operations to have the mock return more meaningful responses. We also want to add some state to be able to store new entries to our test tool list for later reference. For example, to add a test tool submitted using a POST to our list – after verifying that all parameters have been assigned a value – we use the following implementation for the exports.postTesttools method:

/*
 * POST /testtools
 *
 */
exports.postTesttools = function(req, res) {
    
    if (req.body.name === undefined) {
        return res.json(400, { status: "error", details: "missing tool name" });
    }
    
    if (req.body.description === undefined) {
        return res.json(400, { status: "error", details: "missing tool description" });
    }
    
    if (req.body.url === undefined) {
        return res.json(400, { status: "error", details: "missing tool website" });
    }
    
    if (req.body.opensource === undefined) {
        return res.json(400, { status: "error", details: "missing tool opensource indicator" });
    }

    // add tool to list of tools (initializing the list first if it doesn't exist yet)
    state.tools = state.tools || [];
    state.tools.push(req.body);
    
    return res.json(200, { status: "ok", details: req.body.name + " successfully added to list" });
};

Likewise, I’ve added meaningful implementations for all other methods. The complete code for our mock API implementation can be found here.
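As an illustration of what those other implementations might look like (this is my own sketch, so it may differ from the code linked above), the GET and DELETE operations on the full list could be implemented like this, using the same persistent state object:

/*
 * GET /testtools
 * (sketch - may differ from the linked implementation)
 */
exports.getTesttools = function(req, res) {
    // return the current list of tools, or an empty list if nothing has been added yet
    return res.json(200, state.tools || []);
};

/*
 * DELETE /testtools
 * (sketch - may differ from the linked implementation)
 */
exports.deleteTesttools = function(req, res) {
    // reset the list of tools
    state.tools = [];
    return res.json(200, { status: "ok", details: "list of test tools cleared" });
};

Resetting state.tools to an empty array on a DELETE means that a subsequent GET returns [], which is exactly what the tests further down expect.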

Finally, to prove that the mock API works as desired, I wrote a couple of quick tests using REST Assured. Here are a couple of them:

// base URL for our Sandbox testtools API
static String baseUrl = "http://testtools.getsandbox.com/testtools";
	
// this is added to the URL to perform actions on a specific item in the list
static String testtoolParameter = "/awesometool";
	
// original JSON body for adding a test tool to the list 
static String testtoolJson = "{\"name\":\"awesometool\",\"description\":\"This is an awesome test tool.\",\"url\":\"http://awesometool.com\",\"opensource\":\"true\"}";
	
// JSON body used to update an existing test tool
static String updatedTesttoolJson = "{\"name\":\"awesometool\",\"description\":\"This is an awesome test tool.\",\"url\":\"http://awesometool.com\",\"opensource\":\"false\"}";

@BeforeMethod
public void clearList() {
		
	// clear the list of test tools
	delete(baseUrl);
		
	// add an initial test tool to the list
	given().
		contentType("application/json").
		body(testtoolJson).
	when().			
		post(baseUrl).
	then();
}

@Test
public static void testGetAll() {
		
	// verify that a test tool is in the list of test tools
	given().
	when().
		get(baseUrl).
	then().
		body("name", hasItem("awesometool"));
}

@Test
public static void testDeleteAll() {
		
	// verify that the list is empty after a HTTP DELETE		
	given().
	when().
		delete(baseUrl).
	then().
		statusCode(200);
	
	given().
	when().
		get(baseUrl).
	then().
		body(equalTo("[]"));
}

An Eclipse project containing all of the tests I’ve written (there really aren’t that many, by the way, but I can think of a lot more) can be downloaded here.

One final note: Sandbox is a commercial SaaS solution, and therefore requires you to fork over some money if you want to use it in a serious manner. For demonstration and learning purposes, however, their free plan works fine.

Overall, I’ve found Sandbox to be a great platform for rapidly creating useful mock API implementations, especially when you want to simulate RESTful services. I’m not sure whether it works as well when you’re working with XML services, as it seems a little more complicated there to construct meaningful responses without ending up with just a bunch of prepared, semi-fixed responses. Having said that, I’m pretty impressed with what Sandbox does and I’ll surely play around with it some more in the future.

Service virtualization implementation roadblocks

In previous posts I have written about some of the merits of service virtualization (SV) as a means of emulating the behaviour of software components in a distributed environment that do not yet exist or are hard or expensive to access. In this post, I want to take another viewpoint and discuss some of the roadblocks you may encounter when implementing SV – and how to handle them.

What if there’s no traffic to capture?
In case the service or component to be modeled does not yet exist or is otherwise unavailable, there’s no way to monitor and capture any traffic between your system under test and the ‘live’ dependency. This means you need to create the virtual asset and its behaviour more or less from scratch. Of course, modern SV tools will do some of the work for you, for example by creating a skeleton for the virtual asset from a WSDL or RAML specification, but there’s a bit (or even a lot) of work left to do if you’re dealing with anything but the most trivial of dependencies. This means more time and therefore more money is needed to create a suitable virtual dependency.

And please don’t assume that there’s a complete specification of the component to be virtualized available for you to work from. Granted, a WSDL or RAML specification or anything similar can be obtained fairly often beforehand, but those express only a small portion of the required behaviour. They don’t state how specific combinations of request data or specific sequences of interactions are handled, for example. It’s therefore vital to discuss the required behaviour for the virtual asset with stakeholders early and often. Find out what the virtual asset needs to do (and what not) before starting to model it.

Don’t rely too much on capture/playback of virtual asset behaviour
On the other hand, if the component to be virtualized IS available for capture/playback of traffic, please don’t rely too much on the capture/playback function of your SV tool of choice. Although capture/playback is a wonderful way to quickly create a virtual asset that can handle a couple of fixed request/response pairs, I don’t think an SV approach that relies solely on capture/playback is sustainable in the long run. To create assets that are usable, flexible and maintainable, you’ll always need to do some adjustments and additional modeling, likely to the extent where it’s better to model your asset from the ground up rather than work from prerecorded traffic. In that respect, it’s surprisingly similar to test automation!

Having said that, the capture/playback approach certainly has its merits, most importantly when analyzing request/response traffic for components you’re not completely familiar with, or for which no complete specifications and/or subject matter experts are available to clarify any holes in the specs.

How to handle custom or proprietary protocols and message formats
In a perfect world – at least from an integration and compatibility point of view – all message exchanges between dependent systems are executed using common, open and standardized protocols (such as REST or SOAP) and using standardized message formats (such as JSON or XML). Unfortunately, in practice, there are a lot of situations where less common or even proprietary protocols and message formats are used. Even though modern SV tools support many of these out of the box, they don’t cover the full spectrum, meaning that you will at some point in time encounter a situation where you’ll need to virtualize the behaviour for an application that uses some exotic message protocol or format (or both, if you’re really, really lucky).

In that case, you basically have two options:

  • Build a custom protocol listener and/or message handler to process and respond to incoming messages
  • Forget about using SV for virtualizing this dependency altogether

While it might seem that I’ve put the second option there in jest, it should actually be considered a serious option. Implementing SV is always a matter of balancing costs and benefits, and the effort needed to implement custom protocol listeners and message handlers might just not be outweighed by the benefits.
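To give an idea of what building such a custom listener involves, here’s a minimal, hypothetical sketch (plain Node.js, not tied to any particular SV tool) that listens on a TCP port, matches incoming messages on a fixed byte pattern and returns a canned response:

var net = require('net');

// hypothetical custom protocol listener: the 'protocol' here is a made-up format
// where the first four characters of a message identify the message type
var server = net.createServer(function (socket) {
    socket.on('data', function (data) {
        var messageType = data.toString('ascii', 0, 4);

        if (messageType === 'GETT') {
            // canned response for a 'get test tool' request
            socket.write('OK|awesometool|This is an awesome test tool.');
        } else {
            socket.write('ERR|unsupported message type');
        }
    });
});

// the virtual asset listens on its own port; the system under test is pointed at it
server.listen(9099);

Even a toy example like this hints at the real cost: an actual proprietary protocol comes with framing, sessions and state that all have to be rebuilt by hand, which is exactly why the cost/benefit question deserves serious consideration.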

One possible solution to this problem might be to use Opaque Data Processing or ODP, a technique that matches requests based on byte-level patterns and provides accurate responses based on a transaction library. Currently, this technique is only available for CA Service Virtualization users, though. Also, I’m personally not too sure whether ODP will solve all of your problems, especially when it comes to parameterization and flexibility (see also my previous point on relying too much on capture/playback), but it’s definitely worth a try if you’re a (prospective) CA SV user.

Some concluding considerations
Wrapping up, I’d like to offer the following suggestions for anyone encountering any of the roadblocks on his or her SV implementation journey:

  • Always go with modeling instead of capture/playback, as this allows you to create a more flexible and more maintainable virtual asset
  • Use capture/playback for analysis of message and interaction flows. This can give you useful insights into the way information is exchanged between your system under test and its dependencies
  • Discuss required behaviour early and often to avoid unnecessary (re-)work
  • Avoid custom protocol listeners and message handlers wherever possible, or at the very least do a thorough cost/benefit analysis

Happy virtualizing!