Creating virtual assets in service virtualization: record and playback or behaviour modeling?

When you’re implementing service virtualization, there are essentially two approaches to creating your virtual assets. In this post, I would like to take a closer look at both of these approaches and discuss why I think one should be recommended over the other. Spoiler alert: as with so many things, this is never a matter of black or white. There are always situations where one approach is preferable to the other. You’ll see as we dive deeper into this subject.

Record and playback
The first approach to creating virtual assets is to use your service virtualization solution as a proxy that captures request-response pairs (traffic) sent between your application under test and the dependency that is ultimately being virtualized.

Creating virtual assets using record and playback

This approach does have a number of advantages:

  • Using the record and playback approach, you can have a functional virtual asset up and running in a couple of minutes. There is no need for behaviour modeling or time-consuming response creation.
  • This approach suits situations where there is no formal specification of the structure or contents of the traffic that passes between the application under test and the dependency to be virtualized.

However, there are also a number of downsides:

  • This approach can only be used when the dependency is available for traffic recording. Often, access to the dependency is restricted or the dependency simply does not exist at all.
  • The behaviour exhibited by the virtual asset is restricted to the request-response pairs that were previously recorded. This places a severe limit on the flexibility of the virtual asset. A simple example: with a pure record and playback strategy, the virtual asset will never be able to generate unique message IDs – a blocker for a lot of systems (see the sketch below for what dynamic, modeled behaviour looks like by contrast).
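To make that limitation concrete, here is a minimal sketch of the kind of dynamic behaviour you typically need from a virtual asset. It’s a hypothetical, hand-rolled Node.js/Express stub rather than the output of any particular service virtualization tool, but it shows the idea: a fresh message ID is generated on every call, something a replayed recording can never provide.

// Minimal sketch (hypothetical Node.js/Express stub, not produced by any
// specific service virtualization tool): the response carries a freshly
// generated message ID on every call, which a pure record-and-playback
// asset that replays captured responses verbatim cannot provide.
const express = require('express');
const { randomUUID } = require('crypto');

const app = express();
app.use(express.json());

app.post('/orders', (req, res) => {
  res.json({
    messageId: randomUUID(),   // unique on every call
    status: 'ACCEPTED',
    echo: req.body             // reflect request data back dynamically
  });
});

app.listen(8080);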

When looking at these advantages and disadvantages, one can easily see that applying record and playback for service virtualization is much the same as in test automation: it is a good approach for quickly generating virtual assets, but there are severe limits with regard to flexibility, plus maintenance will most likely be a pain in the a**. Having said that, using record and playback CAN be beneficial, for instance to discover how request and response messages are structured when you’re confronted with a lack of formal message specifications (and yes, that happens more often than you’d think…).

Modeling virtual asset behaviour
As an alternative, virtual asset behaviour can be modeled from the ground up, based on service and/or request and response message specifications. For example, any serious service virtualization solution allows you to generate virtual assets from WSDL or XSD documents (for SOAP-based web services) or WADL, RAML or Swagger specifications (for RESTful web services). To make them more flexible, virtual assets can be made data driven using data sources such as databases or Excel spreadsheets.
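To give an idea of what data driven means here, consider the rough sketch below. It uses a hypothetical, hand-rolled Express stub with a small in-memory table standing in for the database or spreadsheet; an actual service virtualization tool would hook up the data source for you, but the principle is the same: the response is looked up using a value taken from the incoming request.

// Rough sketch of a data-driven stub (hypothetical Express code; the in-memory
// table stands in for a database or spreadsheet): the response is looked up
// using the customer id taken from the incoming request.
const express = require('express');
const app = express();

const customers = {
  '1001': { name: 'Alice', status: 'GOLD' },
  '1002': { name: 'Bob',   status: 'SILVER' }
};

app.get('/customers/:id', (req, res) => {
  const customer = customers[req.params.id];
  if (!customer) {
    return res.status(404).json({ error: 'customer not found' });
  }
  res.json(customer);
});

app.listen(8080);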

Creating virtual assets using behaviour modeling

Some advantages of creating virtual assets from scratch are:

  • The resulting virtual assets are generally easier to maintain since their structure is linked to a specification. When the service interface specifications change, updating your virtual asset to reflect the latest interface definition version can often be done with a single click of a button.
  • Since the virtual asset is created using specifications rather than a limited set of request-response pairs, theoretically all incoming requests can be processed and responded to, no matter the data they contain. In practice, there might still be some situations where an incoming request cannot be handled, at least not without some more advanced virtual asset configuration, but the number of requests that CAN be handled is much higher compared to the record and playback approach.

Of course, the behaviour modeling approach too has some disadvantages:

  • It takes longer to create the virtual assets. Where record and playback gives you a working asset in minutes, modeling a virtual asset from scratch takes more time, and the time required grows with the number of message types to be supported and with the size and number of elements in the response messages.
  • As said before, sometimes message or web service specifications for the dependency to be virtualized are not readily available, for example when the dependency is still under development. This makes it hard to create virtual assets (although in this case record and playback isn’t an option either).

So, which approach should I choose?
After reading the arguments presented in this blog post, it shouldn’t be too hard to deduce that I am a big fan of creating virtual assets using the behaviour modeling approach. I firmly believe that although the initial setup of a virtual asset takes some additional work compared to record and playback, behaviour modeling results in virtual assets that are more flexible, more versatile and easier to maintain. There are some cases where using the record and playback approach may be beneficial, including vendor demos that show how easy service virtualization really is (beware of those!). In general, though, you should go for building virtual assets from the ground up.

Leveraging previously recorded traffic to create more flexible virtual assets
Again, I’m not completely writing off the use of record and playback, especially since some interesting recent developments open up options to leverage virtual assets created from previously recorded traffic. Perhaps the most interesting option was shown to me recently by Hasan Ferit Enişer, an M.Sc. student from the Computer Engineering department at Boğaziçi University. I have been in touch with him on and off for the last year or so.

He is doing some interesting work that touches on service virtualization and he’s looking to apply some of the specification mining theories proposed in this paper to prerecorded traffic and virtual assets. This research is still in an early phase, so there’s no telling what the end result will look like and how applicable it will be to industry challenges, but it’s an interesting development nonetheless and one that I’ll keep following closely. Hopefully I’ll be able to write a follow-up post with some interesting results in the not too distant future.

Why service virtualization is like a wind tunnel

This week I attended the second edition of the Continuous Delivery Conference here in the Netherlands. It was a very interesting day with some good keynotes and presentations, plus I had a lot of interesting discussions with both old and new acquaintances, something that really adds value to a conference for me as an independent consultant. But that’s not what this post is about. Rather, it’s about something that struck me when listening to one of the presentations. I’m not sure who was talking as I only made notes much later, but if it was you, please reveal yourself and come get your credits! The presenter briefly discussed service virtualization as a tool to use in the continuous delivery pipeline and compared it to using a wind tunnel for investigating the aerodynamic properties of cars or airplanes. This struck me as a very solid analogy, so much so that I would like to share it with you here.

Testing in a wind tunnel

So why do I think service virtualization IS like wind tunnel testing, exactly?

It allows for executing tests in a controlled environment
Testing car or airplane aerodynamics only yields valuable results if you know exactly what conditions applied and which input parameters (wind speed, angle, variation, etc.) were used to obtain the results. Similarly, when you’re testing any distributed system with external dependencies, you can only safely rely on the test results when you know exactly how those dependencies are behaving. With modern-day highly distributed applications – this especially applies to applications built using microservice architectures – not all dependencies are necessarily under your control anymore. If you want to have full control over the behaviour of those dependencies for stable and reliable testing, service virtualization is an excellent option.

Tests can be repeated under the exact same circumstances
Wind tunnels enable test teams to rerun specific tests over and over again, using the same conditions every time. This allows them to exactly determine the effect of any change on the aerodynamics of their car or plane under test. In software testing, this is exactly what you want when you need to reproduce or analyze a defect or any other suspect behaviour. Unfortunately, when dependencies are outside your circle of control, this might not be easy, if possible at all. When you’re using virtual assets instead of external dependencies, it’s far easier to recreate the exact conditions that applied when the defect occurred.

It can be used to test for highly improbable situations
150mph wind gusts, wind coming from three directions at the same time, … Situations that might be really hard – if not impossible – to reproduce when road testing, but made possible by using a wind tunnel. It’s those corner cases where interesting behaviour of your test object might surface, so they are really worth looking at. With service virtualization, it’s possible, for example, to prepare highly improbable responses for a virtualized third party dependency and see how your application handles these. This is a great way to improve the resilience of and trust in the application you’re developing and testing.
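As a purely illustrative sketch (hypothetical Express code, not tied to any particular tool), such an improbable response could be something as simple as a long delay followed by a truncated payload:

// Illustrative sketch: a stub simulating improbable third-party behaviour,
// in this case a 30-second delay followed by a truncated JSON body, so you
// can observe how your application copes with it.
const express = require('express');
const app = express();

app.get('/thirdparty/quote', (req, res) => {
  setTimeout(() => {
    res.status(500)
       .type('json')
       .send('{"quote": ');   // deliberately truncated JSON
  }, 30000);                  // wait 30 seconds before responding
});

app.listen(8080);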

There’s always a need for real life road testing
As with testing cars and planes, you can go a long way using simulated test environments, but there’s no place like the road to really see how your software holds up. So never rely on virtualization alone when testing any application that uses dependencies, because there’s always a situation or two you didn’t think of when virtualizing. Instead, use your software wind tunnel wisely and your testing process will see the benefits.

Creating mock RESTful APIs using Sandbox

While browsing through my Twitter feed a couple of days ago I saw someone mentioning Sandbox, a Software as a Service (SaaS) solution for quick creation and deployment of mock services for development and testing purposes. After starting to play around with it a bit, I was rather impressed by the ease with which one can create useful mocks from Swagger, RAML and WSDL API specifications.

As an example, I created a RAML API model for a simple API that shows information about test tools. Consumers of this API can create, read, update and delete entries in the list. The API contains six different operations:

  • Add a test tool to the list
  • Delete a test tool from the list
  • Get information about a specific test tool from the list
  • Update information for a specific test tool
  • Retrieve the complete list of test tools
  • Delete the complete list of test tools

You can view the complete RAML specification for this API here.

Creating a skeleton for the mock API in Sandbox is as easy as registering and then loading the RAML specification into Sandbox:
Loading the RAML specification in Sandbox

Sandbox then generates a fully functioning API skeleton based on the RAML:

API operations from the RAML

Sandbox also creates a routing routine for every operation:

var testtools = require("./routes/testtools.js")

/* Route definition styles:
 *
 *	define(path, method, function)
 *	soap(path, soapAction, function)
 *
 */
Sandbox.define("/testtools", "POST", testtools.postTesttools);
Sandbox.define("/testtools", "GET", testtools.getTesttools);
Sandbox.define("/testtools", "DELETE", testtools.deleteTesttools);
Sandbox.define("/testtools/{toolname}", "GET", testtools.getTesttools2);
Sandbox.define("/testtools/{toolname}", "PUT", testtools.putTesttools);
Sandbox.define("/testtools/{toolname}", "DELETE", testtools.deleteTesttools2);

and empty responses for every operation (these are the responses for the first two operations):

/*
 * POST /testtools
 *
 * Parameters (body params accessible on req.body for JSON, req.xmlDoc for XML):
 *
 */
exports.postTesttools = function(req, res) {
	res.status(200);

	// set response body and send
	res.send('');
};

/*
 * GET /testtools
 *
 * Parameters (named path params accessible on req.params and query params on req.query):
 *
 * q(type: string) - query parameter - Search phrase to look for test tools
 */
exports.getTesttools = function(req, res) {
	res.status(200);

	// set response body and send
	res.type('json');
	res.render('testtools_getTesttools');
};

Sandbox also creates template responses (these are rendered using res.render('testtools_getTesttools') in the example above). These are mostly useful when dealing with either very large JSON responses or XML responses, though, and as our example API has neither, we won’t use them here.

To show that the generated API skeleton is fully working, we can simply send a GET to the mock URL and verify that we get a response:
Testing the generated API skeleton

Now that we’ve seen our API in action, it’s time to implement the operations to have the mock return more meaningful responses. We also want to add some state to be able to store new entries to our test tool list for later reference. For example, to add a test tool submitted using a POST to our list – after verifying that all parameters have been assigned a value – we use the following implementation for the exports.postTesttools method:

/*
 * POST /testtools
 *
 */
exports.postTesttools = function(req, res) {
    
    if (req.body.name === undefined) {
        return res.json(400, { status: "error", details: "missing tool name" });
    }
    
    if (req.body.description === undefined) {
        return res.json(400, { status: "error", details: "missing tool description" });
    }
    
    if (req.body.url === undefined) {
        return res.json(400, { status: "error", details: "missing tool website" });
    }
    
    if (req.body.opensource === undefined) {
        return res.json(400, { status: "error", details: "missing tool opensource indicator" });
    }

    // add tool to the list of tools (initialize the list on first use)
    state.tools = state.tools || [];
    state.tools.push(req.body);
    
    return res.json(200, { status: "ok", details: req.body.name + " successfully added to list" });
};
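To give an idea of the pattern, here’s a sketch of what a read operation against that same state object could look like, assuming the path parameter is exposed as req.params.toolname, as the generated comments suggest. This is just an illustration; the actual implementations are in the linked code.

/*
 * GET /testtools/{toolname}
 */
exports.getTesttools2 = function(req, res) {

    // look up the requested tool in the stored list (if any)
    var tool = (state.tools || []).filter(function(t) {
        return t.name === req.params.toolname;
    })[0];

    if (tool === undefined) {
        return res.json(404, { status: "error", details: "tool not found: " + req.params.toolname });
    }

    return res.json(200, tool);
};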

Likewise, I’ve added meaningful implementations for all other methods. The complete code for our mock API implementation can be found here.

Finally, to prove that the mock API works as desired, I wrote a couple of quick tests using REST Assured. Here’s a couple of them:

// base URL for our Sandbox testtools API
static String baseUrl = "http://testtools.getsandbox.com/testtools";
	
// this is added to the URL to perform actions on a specific item in the list
static String testtoolParameter = "/awesometool";
	
// original JSON body for adding a test tool to the list 
static String testtoolJson = "{\"name\":\"awesometool\",\"description\":\"This is an awesome test tool.\",\"url\":\"http://awesometool.com\",\"opensource\":\"true\"}";
	
// JSON body used to update an existing test tool
static String updatedTesttoolJson = "{\"name\":\"awesometool\",\"description\":\"This is an awesome test tool.\",\"url\":\"http://awesometool.com\",\"opensource\":\"false\"}";

@BeforeMethod
public void clearList() {
		
	// clear the list of test tools
	delete(baseUrl);
		
	// add an initial test tool to the list
	given().
		contentType("application/json").
		body(testtoolJson).
	when().			
		post(baseUrl).
	then();
}

@Test
public static void testGetAll() {
		
	// verify that a test tool is in the list of test tools
	given().
	when().
		get(baseUrl).
	then().
		body("name", hasItem("awesometool"));
}

@Test
public static void testDeleteAll() {
		
	// verify that the list is empty after a HTTP DELETE		
	given().
	when().
		delete(baseUrl).
	then().
		statusCode(200);
	
	given().
	when().
		get(baseUrl).
	then().
		body(equalTo("[]"));
}

An Eclipse project containing all of the tests I’ve written (there really aren’t that many, by the way, but I can think of a lot more) can be downloaded here.

One final note: Sandbox is a commercial SaaS solution, and therefore requires you to fork over some money if you want to use it in a serious manner. For demonstration and learning purposes, however, their free plan works fine.

Overall, I’ve found Sandbox to be a great platform for rapidly creating useful mock API implementations, especially when you want to simulate RESTful services. I’m not sure whether it works as well when you’re working with XML services, because it seems a little more complicated to construct meaningful responses without creating just a bunch of prepared semi-fixed responses. Having said that, I’m pretty impressed with what Sandbox does and I’ll surely play around with it some more in the future.