Service virtualization: open source or commercial tooling?

I’ve been talking regularly about the benefits of introducing service virtualization to your software development and testing process. However, once you’ve decided that service virtualization is the way forward, you’re not quite there yet. One of the decisions that still needs to be made is the selection of a tool or tool suite.

A note: following the ‘why?’-‘what?’-‘how?’ strategy of implementing new tools, tool selection (which is part of the ‘how?’) should only take place once it is clear why implementing service virtualization might be beneficial and what services are to be virtualized. A tool is a means of getting something done, nothing more.

As with many different types of tools, one of the choices to be made in the tool selection process is the question of purchasing a commercial tool or taking the open source route. In this post, I’d like to highlight some of the benefits and drawbacks of either option.

Commercial off the shelf (COTS) solutions
Many major tool vendors, including Parasoft, HP and CA, offer a service virtualization platform as part of their product portfolio. These are sophisticated platforms that are generally rich in features and possibilities. If you ask me, the most important benefits of going the commercial route are:

  • Support for a multitude of communication protocols and message standards. When you’re dealing with many different types of messages, or with proprietary or uncommon message standards or communication protocols, commercial solutions offer a much higher chance of supporting everything you need to virtualize out of the box. This potentially saves a lot of time developing and maintaining a custom solution.
  • Support for different types of testing. In addition to simulating request/response behaviour, most commercial tools offer options to simulate behaviour needed for other types of tests. These options include random message dropping (to test the resilience and error handling of your application under test) and configurable performance characteristics (when virtual assets are to be included in performance testing).
  • Seamless integration with test tools. Commercial vendors often offer API and other functional testing tools that can be combined with their service virtualization platform to create effective and flexible testing setups.
  • Extensive product support. When you have questions about using these tools, support is often just a phone call away, with the conditions formalized in support and maintenance contracts.

As with everything, there are some drawbacks too when selecting a COTS service virtualization solution:

  • Cost. These solutions tend to be very expensive, both in license fees and in the consultancy fees required for a successful implementation. According to friends who sell and implement these tools for a living, cost is the single biggest barrier for organizations considering an investment in service virtualization.
  • Possible vendor lock-in. A lot of organizations prefer not to work with big tool vendors because it is hard to migrate away if, over time, you decide to discontinue their tools. I have no first-hand experience with this drawback, however.

Open source solutions
As an alternative to the aforementioned commercially licensed service virtualization tools, more and more open source solutions are entering the field. Some good examples of these would be WireMock and Hoverfly. These open source solutions have a number of advantages over paid-for alternatives:

  • Get started quickly. You can simply download and install them and get started right away; documentation and examples are available online to get you up and running (see the sketch right after this list for an impression).
  • Possibly cheaper. There are no license fees involved, which for many organizations is a reason to choose open source instead of COTS solutions. Beware of the hidden costs of open source software, though! License fees aren’t the only factor to be considered.
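
To give an impression of just how quickly you can get going, here’s a rough sketch using WireMock’s Java API. The port, endpoint and response body below are made up for the example, so treat it as a starting point rather than a recipe.

    // Minimal WireMock quick start: start an HTTP stub server and register one canned response.
    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

    public class QuickStart {
        public static void main(String[] args) {
            WireMockServer server = new WireMockServer(options().port(8089));
            server.start();

            // Any GET on /api/customers/1 now returns this canned JSON response.
            server.stubFor(get(urlEqualTo("/api/customers/1"))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")
                    .withBody("{\"id\": 1, \"name\": \"Jane Doe\"}")));
        }
    }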

Open source service virtualization solutions, too, have their drawbacks:

  • Limited set of features. The features they offer are generally more limited than those of their commercial counterparts. For example, both WireMock and Hoverfly can virtualize HTTP-based messaging, but do not offer support for other types of messaging, such as JMS or MQ.
  • Narrow scope. As a result of the above, these tools are geared towards specific situations in which they can be very useful, but they might not suffice for an enterprise-wide implementation. For example, an HTTP-based service virtualization tool might have been implemented successfully for HTTP-based services, but when the team wants to extend its service virtualization efforts to JMS or MQ services, that is often not possible without either developing an adapter or adding another tool to the tool set.
  • Best-effort support. When you have a question regarding an open source tool, support is usually provided on a best-effort basis, either by the creator(s) of the tool or by the user community. Especially for tools that are not yet commonplace, support may be hard to come by.

So, what to choose?
When given the choice, I’d follow these steps for deciding between open source and commercial service virtualization tools and, more importantly, for making the implementation a success:

  1. Identify development and testing bottlenecks and decide whether service virtualization is a possible solution
  2. Research open source solutions to see whether there is a tool available that enables you to virtualize the dependencies that form the bottleneck
  3. Compare the tools found to commercial tools that are also able to do this. Make sure to base this comparison on the Total Cost of Ownership (TCO), not just on initial cost!
  4. Select the tool that is the most promising for what you’re trying to achieve, run (in case of open source) or request (in case of COTS) a proof of concept, and evaluate whether the selected tool is fit for the job.
  5. Continue implementation and periodically evaluate your tool set to see whether the fit is still just as good (or better!).

If you have a story to tell concerning service virtualization tool selection and implementation, please do share it here in the comments!

8 thoughts on “Service virtualization: open source or commercial tooling?”

  1. Hi Bas,

    Obviously I’m coming from a biased position here, but I think the advantages of OSS tools go beyond simply capital cost and lead time.

    All of the commercial vendors you mentioned charge high prices for their products and therefore have to sell primarily to C-level executives. A side effect of this is that ticking feature boxes takes precedence over UX and quality (I would argue this is true for a majority of enterprise software generally).

    As an example, although some of the commercial tools claim to support exotic protocols via “learning”, I’ve heard from folks who’ve tried it that this tends to be unusable in practice.

    OSS tools, on the other hand, only become popular if the people actually using them like them, so OSS project maintainers must focus heavily on developer usability and productivity. A practical consequence of this is that TCO is lower both initially (due to zero up-front cost) and in the long term (due to higher productivity).

    Again, to support this with a practical example: many commercial SV tools are entirely GUI driven, i.e. they have no (or at best incomplete) APIs. As projects mature and complexity scales up, this inevitably results in the need to scale human labour. This is essentially the same trap as manual regression testing.

    By contrast, WireMock exposes every function via its API, which means that complexity can be scaled via automation and abstraction in code, avoiding the need to devote ever more manual labour.
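
    As a rough sketch of what that can look like in practice (the endpoint and payload are made up for illustration), a whole family of stubs can be generated in a few lines of code instead of being maintained by hand:

      import com.github.tomakehurst.wiremock.WireMockServer;
      import static com.github.tomakehurst.wiremock.client.WireMock.*;

      public class CustomerStubs {
          // Registers stubs programmatically; this kind of repetition quickly becomes
          // unmanageable when every stub has to be clicked together in a GUI.
          public static void registerCustomerStubs(WireMockServer server, int count) {
              for (int id = 1; id <= count; id++) {
                  server.stubFor(get(urlEqualTo("/api/customers/" + id))
                      .willReturn(okJson("{\"id\": " + id + "}")));
              }
          }
      }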

    • Hi Tom,

      First of all, thanks for taking the time to read and comment on this piece. It’s very true that purchasing commercial tools requires management approval and therefore runs the risk of it becoming a ‘tick the feature boxes’ game. From my own (limited) experience with SV tool selection, however, COTS solutions still need to prove their worth on the work floor through extensive PoCs, just like their OSS counterparts. I’ve never been involved in an SV tool selection where the ultimate decision was made by management without solid argumentation from the engineers. Maybe that’s because the SV vendor I’ve worked with is more of an engineering club with a couple of sales people than the other way around, I don’t know…

      You make an excellent point about the ‘learning’ part. That’s basically what record and playback is in test automation: nice for protocol / application analysis purposes, but useless beyond that. See also this post where I discussed exactly this.

      Oh, and the COTS SV solution I’ve worked with does offer an API that allows me to set up, configure and take down virtual assets (mocks) in an automated manner, among other things. I’ve successfully used that in combination with test automation to create some very flexible test setups. Yes, there’s a GUI that does that too, but all the GUI does is provide a slick way to consume that very API. Again, I don’t have enough experience with other SV solutions to make a general statement, so maybe I’ve only worked with the exception to the rule…

      Note that the above is in no way meant to invalidate your arguments (again, they’re much appreciated and you’re probably way more experienced in the field than I am), but I think it’s only fair to share my own experiences with COTS SV here too 😉

      • I’m pretty sure I know which tool you’re referring to, and I agree it seems like they’ve managed to create a decent developer UX. However, from the (admittedly limited) anecdotes I’ve collected about other mainstream SV tools, they seem to be the exception in this respect.

        I’ve also mostly heard from folks who feel like they’ve had an SV solution pushed on them. Perhaps in some cases this is simply the tool being purchased before their tenure, rather than a lack of consultation. But arguably the effect is the same – the expensive licence practically obligates you to use it even when it has outgrown its usefulness.

        Another example that’s worth mentioning of how OSS can provide a superior experience is in the ability to scale down as well as up. MockServer, Hoverfly and WireMock can all be embedded into projects as libraries in addition to being deployable as servers. This avoids the need for deployment in the middle of the dev/check cycle (for some test activities at least), which can dramatically lower cycle time.
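
        As an illustration (a simplified sketch with made-up names and a made-up port), embedding WireMock as a library in a JUnit test looks roughly like this:

          import com.github.tomakehurst.wiremock.junit.WireMockRule;
          import org.junit.Rule;
          import org.junit.Test;
          import static com.github.tomakehurst.wiremock.client.WireMock.*;

          public class PaymentClientTest {
              // WireMock runs inside the test JVM; there is no separate server to deploy or manage.
              @Rule
              public WireMockRule wireMock = new WireMockRule(8089);

              @Test
              public void handlesDeclinedPayment() {
                  // The stub lives with the test, so it is versioned and reviewed along with it.
                  wireMock.stubFor(post(urlEqualTo("/payments"))
                      .willReturn(aResponse().withStatus(402)));

                  // ... exercise the (hypothetical) client under test against http://localhost:8089 here
              }
          }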

        • Sure, it’s not a secret that I’ve worked with Virtualize a couple of times; if it was, I wouldn’t have put it on my resume and my LinkedIn profile…

          I recognize what you’re saying about expensive tools being forced upon people just because the organization has purchased it somewhere in the past. I’ve seen this mostly with test automation tools, but the effect is the same.

          Your last point is very true, and that’s the reason I like working with open source tools (both in test automation and service virtualization) myself as well. I’m not sure I’d refer to one as superior to the other, though; I think they both have their place. It’s all a matter of thinking before acting, and that’s exactly why I wrote this post: to give people some things to think about in case they’re in the position of choosing an SV tool to implement.

          Thanks again for the insight, this is giving me some things to think and talk about myself as well, also for the WireMock workshop we’ve discussed. On a very loosely related note: there’s a little report on service virtualization coming out next month, written by yours truly. If you look closely enough, you might spot the WireMock name as well as your own somewhere in there… The report itself is completely tool-agnostic though, I’ll leave the promotional talk for the workshop 🙂

          • Yeah, I totally agree that context is everything. I certainly wouldn’t claim OSS is automatically the best choice for everyone.

            I look forward to reading your report when it’s published 🙂

            Incidentally, I was interviewed by a researcher from Forrester a couple of weeks ago so hopefully WireMock will be getting a mention in their next SV report.

  2. Hi guys –

    Good article – Tom’s point about how OSS tools offer the ability to scale down and up certainly resonates. Although it is an entirely different tool of course, I think Ansible is a good example of this: it is easy to adopt and useful for an individual with just a few servers to manage, but it can also be used at scale within a large organization. OSS options offer the advantage of providing an easy, low-cost introduction, and allow teams to experiment and evaluate the tool.

    The option of paying for commercial support for an open source tool once it has been evaluated and adopted might in some cases make the OSS route more attractive (subtle, but shameless plug :-).

    That said, the point about the limited scope of OSS tools is certainly a factor. Tools like WireMock and Hoverfly are designed to do one thing well. The limited scope can be a disadvantage of course, but there are many cases I’m aware of in which teams have been using only a tiny subset of the features available in a COTS product, meaning the TCO is not justified.

    Really, this just reinforces Bas’ point (number 5) about periodically reviewing the tool to see whether it is still a good fit.
