Service virtualization implementation roadblocks

In previous posts I have written about some of the merits of service virtualization (SV) as a means of emulating the behaviour of software components in a distributed environment that do not yet exist or are hard or expensive to access. In this post, I want to take another viewpoint and discuss some of the roadblocks you may encounter when implementing SV – and how to handle them.

What if there’s no traffic to capture?
If the service or component to be modeled does not yet exist or is otherwise unavailable, there’s no way to monitor and capture any traffic between your system under test and the ‘live’ dependency. This means you need to create the virtual asset and its behaviour more or less from scratch. Of course, modern SV tools will do some of the work for you, for example by creating a skeleton for the virtual asset from a WSDL or RAML specification, but there’s a fair bit (or even a lot) of work left to do if you’re dealing with anything but the most trivial of dependencies. This means more time, and therefore more money, is needed to create a suitable virtual dependency.

And please don’t assume that there’s a complete specification of the component to be virtualized available for you to work from. Granted, a WSDL or RAML specification (or something similar) can often be obtained beforehand, but these express only a small portion of the required behaviour. They don’t state how specific combinations of request data or specific sequences of interactions are handled, for example. It’s therefore vital to discuss the required behaviour for the virtual asset with stakeholders early and often. Find out what the virtual asset needs to do (and what it doesn’t) before starting to model it.
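To give an idea of what modeling such behaviour looks like in practice, here’s a minimal sketch using WireMock, an open source stubbing and virtualization tool (the /orders endpoint, port and payloads are invented for illustration). It models sequence-dependent behaviour that a WSDL or RAML file simply cannot express:

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.stubbing.Scenario.STARTED;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;

public class OrderStatusVirtualAsset {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(WireMockConfiguration.options().port(9090));
        server.start();

        // First call: the order is still pending, and the asset moves to the next state...
        server.stubFor(get(urlEqualTo("/orders/42/status"))
                .inScenario("order lifecycle")
                .whenScenarioStateIs(STARTED)
                .willSetStateTo("shipped")
                .willReturn(okJson("{\"status\": \"PENDING\"}")));

        // ...subsequent calls: the order has shipped. This kind of sequencing is exactly
        // the behaviour you need to agree on with stakeholders before modeling it.
        server.stubFor(get(urlEqualTo("/orders/42/status"))
                .inScenario("order lifecycle")
                .whenScenarioStateIs("shipped")
                .willReturn(okJson("{\"status\": \"SHIPPED\"}")));
    }
}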

Don’t rely too much on capture/playback of virtual asset behaviour
On the other hand, if the component to be virtualized IS available for capture/playback of traffic, please don’t rely too much on the capture/playback function of your SV tool of choice. Although capture/playback is a wonderful way to quickly create a virtual asset that can handle a couple of fixed request/response pairs, I don’t think an SV approach that relies solely on capture/playback is sustainable in the long run. To create assets that are usable, flexible and maintainable, you’ll always need to do some adjustments and additional modeling, often to the extent that it’s better to model your asset from the ground up than to work from prerecorded traffic. In that respect, it’s remarkably similar to test automation!
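To make the difference a bit more concrete, here’s a small sketch (again using WireMock, with an invented /customers endpoint): the first stub is the kind of fixed request/response pair that plain capture/playback gives you, the second is a modeled rule that covers a whole class of requests and is therefore far easier to maintain:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;

public class CustomerServiceVirtualAsset {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(WireMockConfiguration.options().port(9090));
        server.start();

        // Capture/playback style: one exact request, one canned response.
        // You would need one of these for every single recorded exchange.
        server.stubFor(get(urlEqualTo("/customers/12345"))
                .willReturn(okJson("{\"id\": \"12345\", \"status\": \"ACTIVE\"}")));

        // Modeled style: a single rule handles any customer id,
        // replacing dozens (or hundreds) of recorded pairs.
        server.stubFor(get(urlPathMatching("/customers/[0-9]+"))
                .willReturn(okJson("{\"status\": \"ACTIVE\"}")));
    }
}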

Having said that, the capture/playback approach certainly has its merits, most importantly when analyzing request/response traffic for components you’re not completely familiar with, or for which no complete specifications and/or subject matter experts are available to clarify any holes in the specs.

How to handle custom or proprietary protocols and message formats
In a perfect world – at least from an integration and compatibility point of view – all message exchanges between dependent systems are executed using common, open and standardized protocols (such as REST or SOAP) and standardized message formats (such as JSON or XML). Unfortunately, in practice there are a lot of situations where less common or even proprietary protocols and message formats are used. Even though modern SV tools support many of these out of the box, they don’t cover the full spectrum, meaning that at some point you will encounter a situation where you need to virtualize the behaviour of an application that uses an exotic message protocol or format (or both, if you’re really, really lucky).

In that case, you basically have two options:

  • Build a custom protocol listener and/or message handler to process and respond to incoming messages
  • Forget about using SV for virtualizing this dependency altogether

While it might seem that I’ve put the second option there in jest, it should actually be considered a serious option. Implementing SV is always a matter of balancing costs and benefits, and the effort needed to implement custom protocol listeners and message handlers might simply not be outweighed by the benefits.
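To give an impression of what the first option involves, here’s a bare-bones sketch of a custom listener for a hypothetical pipe-delimited message format over plain TCP (the port and message layout are made up). A production-grade listener would also need proper framing, threading and error handling, which is exactly where the costs start to add up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class LegacyProtocolListener {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5555)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {

                    // Read a single request line, e.g. "ORDER|12345|NEW"
                    String request = in.readLine();
                    String[] fields = request == null ? new String[0] : request.split("\\|");
                    String orderId = fields.length > 1 ? fields[1] : "UNKNOWN";

                    // Respond with a canned acknowledgement in the same custom format
                    out.println("ACK|" + orderId + "|OK");
                }
            }
        }
    }
}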

Another option that might help here is Opaque Data Processing (ODP), a technique that matches requests based on byte-level patterns and provides accurate responses based on a transaction library. Currently, this technique is only available to CA Service Virtualization users, though. Also, I’m personally not too sure whether ODP will solve all of your problems, especially when it comes to parameterization and flexibility (see also my previous point on relying too much on capture/playback), but it’s definitely worth a try if you’re a (prospective) CA SV user.

Some concluding considerations
Wrapping up, I’d like to offer the following suggestions for anyone encountering any of these roadblocks on their SV implementation journey:

  • Always go with modeling instead of capture/playback, as this allows you to create a more flexible and more maintainable virtual asset
  • Use capture/playback to analyze message and interaction flows; this can give you useful insights into the way information is exchanged between your system under test and its dependencies
  • Discuss required behaviour early and often to avoid unnecessary (re-)work
  • Avoid custom protocol listeners and message handlers wherever possible, or at the very least do a thorough cost/benefit analysis

Happy virtualizing!

My article on service virtualization has been published on StickyMinds

On August 10th, StickyMinds, a popular online community for software development professionals, published an article I wrote titled ‘Four ways to boost your testing process with service virtualization’. In this article, I describe a case study from a project I worked on recently, where service virtualization was used to significantly improve the testing process.

The article demonstrates (as you can probably guess from the title) how you too can employ service virtualization to remove some of the bottlenecks associated with testing and test data and test environment management for distributed systems and development processes.

You can read the article by visiting the StickyMinds homepage; it’s right there on the front page.


Please let me know what you think in the comments! Also, feel free to share the article with your connections on social media.

Again, the article can be read here.

Stubs, mocks or virtual assets?

If, during the software development process, you stumble upon the need to access software components that:

  • Have not yet been developed,
  • Do not contain sufficient test data,
  • Require access fees, or
  • Are otherwise constrained with regard to accessibility,

there are several options you may consider to work around this issue. In this post, I will introduce three of those options and explain some of their most important characteristics.

Please note that the terms ‘stub’ and ‘mock’ are often mixed up in practice, so what is defined as a mock here might be called a stub somewhere else and vice versa. However, I tried to use definitions that are more or less agreed upon in the development and testing community.

Stubs
The simplest way to remove dependency constraints is the use of stubs. A stub is a very simple placeholder that does pretty much nothing besides standing in for another component. It provides no intelligence, no data driven functionality and no validations. It can be created quickly and is most commonly used by developers to mimic the behaviour of objects and components that are not available in their development environment.
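As a (hypothetical) illustration in Java: a stub for an ExchangeRateService dependency could be as simple as this, just enough to let dependent code compile and run:

// The dependency that isn't available in the development environment.
interface ExchangeRateService {
    double getRate(String fromCurrency, String toCurrency);
}

// The stub: no logic, no data driven behaviour, no validations.
// It only exists so the code that depends on it can be developed and run.
class ExchangeRateServiceStub implements ExchangeRateService {
    @Override
    public double getRate(String fromCurrency, String toCurrency) {
        return 1.0; // always the same canned value
    }
}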

Mocks
Mocks contain a little more intelligence compared to stubs. They are commonly configured for specific test or development purposes, and they are used to define and verify expectations with regard to behaviour. For example, a mock service might be configured to always return certain test data in response to a request received, so that specific test cases can be executed by testers. The difference between mocks and stubs from a testing perspective can be summarized by the fact that a mock can cause a test case to fail, whereas a stub can’t.
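A sketch of that difference, using Mockito and JUnit (the ExchangeRateService and PaymentProcessor types are invented for the example): the mock is configured with behaviour for one specific test case, and the final verify step can make the test fail if the expected interaction never happened, which is something a stub will never do:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class PaymentProcessorTest {

    // Hypothetical dependency and class under test, kept inline for brevity.
    interface ExchangeRateService {
        double getRate(String from, String to);
    }

    static class PaymentProcessor {
        private final ExchangeRateService rates;
        PaymentProcessor(ExchangeRateService rates) { this.rates = rates; }
        double chargeInUsd(double amount, String currency) {
            return amount * rates.getRate(currency, "USD");
        }
    }

    @Test
    void chargesCustomerUsingConfiguredRate() {
        // Configure the mock with the behaviour this test case needs...
        ExchangeRateService rates = mock(ExchangeRateService.class);
        when(rates.getRate("EUR", "USD")).thenReturn(1.10);

        double charged = new PaymentProcessor(rates).chargeInUsd(100.0, "EUR");

        assertEquals(110.0, charged, 0.001);
        // ...and verify the expected interaction took place.
        // If it didn't, the test fails, which a stub could never cause.
        verify(rates).getRate("EUR", "USD");
    }
}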

Virtual assets
Virtual assets are simulated components that closely mimic the behaviour of ‘the real thing’. They can take a wide variety of inputs and return responses that their real-life counterpart would return too. They come with data driven capabilities that allow responses (and therefore behaviour) to be configured on the fly, even by people without programming knowledge. Virtual assets should also replicate connectivity to the simulated component by applying the same protocols (JMS, HTTP, etc.) and security configuration (certificates, etc.). The application of virtual assets in test environments is commonly called service virtualization.
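As a rough sketch of what such data driven behaviour might look like (again using WireMock, with invented account numbers and balances); a dedicated SV tool would typically let you maintain this data in an external file or through a UI rather than in code:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import java.util.Map;

public class AccountServiceVirtualAsset {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(WireMockConfiguration.options().port(9090));
        server.start();

        // Data driven behaviour: one entry per account, easy to extend without touching logic.
        Map<String, String> accounts = Map.of(
                "NL01", "{\"accountId\": \"NL01\", \"balance\": 250.00}",
                "NL02", "{\"accountId\": \"NL02\", \"balance\": -10.50}");

        accounts.forEach((id, body) ->
                server.stubFor(get(urlEqualTo("/accounts/" + id))
                        .willReturn(okJson(body))));

        // Anything outside the data set gets a realistic 404, just like the real service would return.
        server.stubFor(get(urlPathMatching("/accounts/.*"))
                .atPriority(10)
                .willReturn(aResponse().withStatus(404)));
    }
}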

Testing terms related to stubs and mocks

If you want to read more about component or API stubbing, mocking or virtualization, this page in the SmartBear API Testing Dojo provides an interesting read. Also, Martin Fowler wrote a great piece on mocks and stubs on his blog back in 2007.