Should virtual assets contain business logic?

A while ago I read an interesting discussion in the LinkedIn Service Virtualization group that centered on the question of whether or not to include business logic in virtual assets when implementing service virtualization. The general consensus was a clear no: you should never include business logic in your virtual assets, because it makes them unnecessarily complex and increases maintenance effort whenever the implementation (or even just the interface) of the service being simulated changes. In general, it is considered better to make your virtual assets purely data-driven, and to keep full control over the data that is sent to the virtual asset, so that you can easily manage the data it returns.
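To make that concrete, here is a minimal sketch of a purely data-driven virtual asset – illustrative Java, not the API of any particular SV tool. Every response comes straight from a data table that the test team controls, and the asset itself computes nothing:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a data-driven virtual asset (names are illustrative):
// every response is looked up from a data table controlled by the
// test team; the asset contains no business logic of its own.
public class DataDrivenStub {

    // Maps a request key (e.g. a student number) to a canned response.
    private final Map<String, String> responseTable = new HashMap<>();

    public void addResponse(String requestKey, String cannedResponse) {
        responseTable.put(requestKey, cannedResponse);
    }

    public String handle(String requestKey) {
        // No computation: unknown requests get a fixed default response.
        return responseTable.getOrDefault(requestKey, "<error>unknown request</error>");
    }

    public static void main(String[] args) {
        DataDrivenStub stub = new DataDrivenStub();
        stub.addResponse("student-001", "<status>enrolled</status>");
        System.out.println(stub.handle("student-001")); // <status>enrolled</status>
    }
}
```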

Data-driven virtual assets

In general, I agree with the opinions voiced there. It is good practice to keep virtual assets as simple as possible, and an important way to achieve this is by adopting the aforementioned data-driven approach. However, in a recent project where I was working on a service virtualization implementation, I had no choice but to implement a fair amount of business logic in some of the virtual assets.

Why? Let’s have a look.

The organization that implemented service virtualization was a central administrative body for higher education in the Netherlands; let's call them C (for Client). Students use C's core application to register for university and college programs, and all information exchange between educational institutions and the government institutions responsible for education goes through this application as well. So, on one side we have the educational institutions, on the other side the government (so to speak). The virtual assets we implemented simulated the interface between C and the government. The test environment for the government interface was used by C's own test team, but C also provided this environment to testers at the universities and colleges who wanted to perform end-to-end tests:

The test situation

As around 20 different universities were using the C test environment for their end-to-end tests, coordinating and centralizing the test data they used was a logistical nightmare. Each institution received a number of test students to use in its tests, but beyond that, C had no control over the number and nature of the tests performed through its test environment. As a result, there was no way to determine the amount and state of the test data sent to and from the virtual asset at any given point in time.

Moreover, the virtual asset needed to remember (part of) the test data it received, because this data was required to calculate responses to future requests. For example, one of the operations supported by the virtual asset determined whether a student was eligible for a certain type of financial support. This depended on the number and types of programs he or she had previously enrolled in and completed.

To enable the virtual asset to remember received data (and essentially become stateful), we hooked up a simple database to the virtual asset and stored the data entity updates we received, such as student enrollments and students finishing a program, in that database. This data was then used to determine any response values that depended on both data and business logic. Simple data retrieval requests were, of course, implemented in a data-driven manner.
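As an illustration, here is a heavily simplified sketch of that stateful approach. The real virtual asset persisted updates to a database; in this sketch an in-memory map stands in for it, and the eligibility rule is invented purely for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of a stateful virtual asset. An in-memory map stands
// in for the database used in the real implementation, and the eligibility
// rule below is hypothetical.
public class StatefulStub {

    // Per-student history of completed programs, built up from incoming updates.
    private final Map<String, List<String>> completedPrograms = new HashMap<>();

    // Store an incoming data entity update (e.g. "student X finished program Y").
    public void recordCompletion(String studentId, String program) {
        completedPrograms.computeIfAbsent(studentId, k -> new ArrayList<>()).add(program);
    }

    // Business logic: the response depends on previously received data.
    // (Hypothetical rule: support is only granted before the first completed program.)
    public boolean isEligibleForSupport(String studentId) {
        return completedPrograms.getOrDefault(studentId, List.of()).isEmpty();
    }

    public static void main(String[] args) {
        StatefulStub stub = new StatefulStub();
        System.out.println(stub.isEligibleForSupport("s-42")); // true
        stub.recordCompletion("s-42", "BSc Computer Science");
        System.out.println(stub.isEligibleForSupport("s-42")); // false
    }
}
```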

This deliberate choice to implement a certain – but definitely limited – amount of business logic in the virtual asset enabled not only company C, but also the organizations that depended on C’s test environments, to successfully perform realistic end-to-end test scenarios.

But…

Once the people at company C saw the power of service virtualization, they kept filing requests for increasingly complex business logic to be implemented in the virtual asset. As any good service virtualization consultant should, I was very cautious about implementing these features. My main argument was that I could implement these features rather easily, but someone would eventually have to maintain them, and by that time I would be long gone. In the end, we found an agreeable mix of realistic behaviour (with the added complexity) and maintainability. They were happy because they had a virtual asset they could use and provide to their partners; I was happy because I delivered a successful and maintainable service virtualization implementation.

So, yes, in most cases you should make sure your virtual assets are purely data-driven when you're implementing service virtualization. However, in some situations, implementing a bit of business logic might just make the difference between an average and a great implementation. Just remember to keep it to the bare minimum.

A couple of presentations on service virtualization

Hey all!

First of all, apologies for the lack of recent postings on this blog. I have been quite busy with my day job and haven't had time to write new posts in a while. I definitely intend to get working on some new material soon, so please do stay tuned.

It’s been great to see that even though I haven’t added any new posts in a while, questions on existing posts are still coming in quite regularly. This is a great motivator for me to keep working on this site, and yet another reason to get my lazy behind into gear and start working on new material.

One of the reasons I’ve been quite busy is the fact that I’ve been doing a couple of presentations at conferences here in the Netherlands, and I thought it would be nice to share these with you as well.

I gave the first presentation in November at the Dutch Testing Conference. It presents a case study from a project I've been working on for the last 8 months or so. In this project, we successfully introduced service virtualization as a means to get rid of some major blockers in our test environment. Using virtualized services that emulate the behaviour of the dependencies that were causing trouble, we have been able to speed up the development and testing process significantly. Introducing SV has also been an enabler for test automation: we simply couldn't do test automation without these virtualized services.

I gave the second presentation at the first Continuous Delivery Conference, also here in the Netherlands. As you can guess from the name, this conference was about Continuous Delivery rather than just testing. However, as our case study showed some pretty significant improvements in the CD area too, we decided to present it there as well. I did this talk at the request of Parasoft.

Both conferences were great to attend, and the CD conference in particular gave me a lot of inspiration for future work and areas to explore. It really showed that testing isn't just an activity in itself anymore (if it ever was), but has become an integral part of a much larger story: that of continuously delivering high-quality software at an ever-increasing pace. It was very interesting to hear and see the perspectives of some of the leading voices in the CD field on this topic.

An introduction to service virtualization

One of the concepts rapidly gaining popularity within the testing world – and in IT in general – is service virtualization (SV). This post provides a quick introduction to the SV concept and the way it supports automated and manual testing efforts, and thereby software development in general.

What is service virtualization?
SV is the practice of simulating the behaviour of an application or resource that is required to perform certain types of tests, in cases where that resource is not readily available or where its availability or use is too expensive. In other words, SV aims to remove traditional dependencies in software development related to the availability of systems and environments. In this way, SV is complementary to other forms of virtualization, such as hardware virtualization (VPS) or operating system virtualization (VMware and similar solutions).

Behaviour simulation is carried out using virtual assets: pieces of software that mimic application behaviour in some way. Although SV started out with the simulation of web service behaviour, modern SV tools can simulate all kinds of communication performed over common messaging protocols. In this way, SV can be used to simulate database transactions, mainframe interaction, and so on.
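As a bare-bones illustration of the idea – without any SV tool at all – the following sketch uses the JDK's built-in HttpServer to stand in for a web service by returning a fixed response. The endpoint path and response body are made up:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal virtual asset: an HTTP endpoint that stands in for a real
// service by returning a fixed response. Built on the JDK's own HttpServer;
// endpoint path and body are illustrative only.
public class MinimalVirtualAsset {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/orders", exchange -> {
            byte[] body = "<order><status>CONFIRMED</status></order>"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start(); // test clients now call http://localhost:8080/orders
    }
}
```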


From a ‘live’ test environment to a virtual test environment

What are the benefits of using service virtualization?
As mentioned in the first paragraph of this post, SV can significantly speed up the development process when required resources:

  • are not available during (part of) the test cycle, thereby delaying tests or negatively influencing test coverage;
  • are too expensive to keep alive (e.g., test environments need to be maintained or rented continuously even though access is required only a couple of times per year);
  • cannot readily emulate the behaviour required for certain types of test cases;
  • are shared throughout different development teams, negatively influencing resource availability.

Service virtualization tools
Currently, four commercial service virtualization tools are available on the market:

  • CA LISA
  • HP Service Virtualization
  • IBM Rational Test Virtualization Server
  • Parasoft Virtualize

Furthermore, several open source service virtualization projects have emerged in recent years, such as WireMock and Betamax. These obviously offer significantly fewer features, but they might fit your project requirements nonetheless, making them worthy of an evaluation.
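To give an impression of what a virtual asset looks like in one of these open source tools, here is a minimal stub using WireMock's Java DSL (assuming the WireMock library is on the classpath; the endpoint and response body are made up for illustration):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

// Minimal WireMock example: a virtual asset simulating one web service
// operation with a canned response. The URL and body are hypothetical.
public class WireMockExample {

    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8080);
        server.start();

        // Any GET on this endpoint returns a fixed XML body.
        server.stubFor(get(urlEqualTo("/students/123"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/xml")
                        .withBody("<student><id>123</id><status>enrolled</status></student>")));

        // The application under test can now be pointed at
        // http://localhost:8080 instead of the real dependency.
    }
}
```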

Personally, I have extensive experience using Parasoft Virtualize and have been able to successfully implement it in a number of projects for our customers. Results have been excellent, as can be seen in the following case study.

A case study
The central order management application at one of our customers relied heavily on an external resource for the successful provisioning of orders. This external resource required manual configuration for each order that was created, which meant the test team had to file a configuration change request for each test case. The delay caused by this configuration could be as much as a week, resulting in severe delays in testing and limiting the achievable test coverage (as testers could only create a small number of orders per test cycle).

Using service virtualization to simulate the behaviour of this external resource, this dependency has been removed altogether. The virtual asset implementing the external resource's behaviour processes new orders in a matter of minutes, as opposed to weeks when the 'real' external resource is required. Using SV, testers can provision orders much faster and are able to achieve much higher test coverage as a result. This has led to a significant increase in software quality. SV has also been a key factor in the switch to an Agile development process; without it, the short development and testing cycles associated with Agile software development would not have been possible.

Service virtualization on Wikipedia