Review: TestCon 2016 – speaking abroad for the first time

Last week I experienced another ‘first’ in my career as a consultant: my first speaking gig abroad. The event: TestCon 2016. Location: Vilnius, Lithuania. Another first there: I’d never been to Lithuania before! I wasn’t sure what to expect, but hey, the best things in life happen when you least expect them, so I was happy to jump at the opportunity when my friends at Parasoft called to ask if I could act as a stand-in for one of their people who couldn’t make it to the conference.

The conference
TestCon 2016 was the first edition of this conference, but you wouldn’t have been able to tell if you didn’t know that beforehand. The event was very well organized, with a good venue (the University of Applied Sciences) and excellent speaker treatment. A separate room for speakers to prepare for their presentations and to wind down afterwards was a first for me, although granted, I am nowhere near an experienced public speaker yet… And don’t forget that full travel and lodging expenses were covered upfront, something a lot of conferences could learn from. Another remarkable feat is that the organization managed to attract around 600 (yes, six hundred) attendees to this first edition. For a relatively small country with no established testing community, that is absolutely amazing. I think a lot of other conference organizers would consider themselves extremely lucky to get such a turnout.

The only thing that could use some improvement next year is the number of local speakers. Of the 25 speakers, only 3 or 4 were from Lithuania; in comparison, there were 6 from the Netherlands. I talked to one of the organizers afterwards and we agreed that hopefully this first event, which was a major success, will lead to more local speakers next year.

My talk
The talk that was originally proposed by Parasoft was called ‘Deploy and Destroy Complete Test Environments: Service Virtualization, Containers and Cloud’. As this is an area that interests me, and one in which I have experience and have done some writing and speaking before, I decided to keep it and construct a story around it based on my own experience. For those of you who are interested in what I talked about, you can see the slides here:

I think my talk went pretty well, although there wasn’t much feedback or reaction from the audience. Later I heard that other speakers experienced this too (though some didn’t), so it might not just have been me; at least I hope not. It did confirm my preference for delivering workshops rather than talks, though, since workshops generate a lot more interaction and feedback thanks to the smaller groups and the hands-on work, rather than just me broadcasting information (or sound, at least). Still, I got some questions and had a couple of good discussions afterwards, so the overall feeling I have looking back on my talk is a positive one.

Looking back
As I said, this was my first experience as a speaker abroad, but as far as I’m concerned it won’t be my last. Travelling, speaking and meeting interesting and fun people has been a very rewarding experience, although an exhausting one as well. The conference itself couldn’t have been organized much better (except for a couple of minor details, maybe). Also, the organization and all the volunteers I had the pleasure of meeting couldn’t have been nicer or more welcoming, and Vilnius was an interesting city to spend a couple of days in. I’m already looking forward to the next trip, even though nothing has been planned yet. I’ll try to make it a workshop gig, as that’s where my interests and strengths lie, I believe, but I won’t say no to delivering another talk, either.

Service virtualization: open source or commercial tooling?

I’ve been talking regularly about the benefits of introducing service virtualization to your software development and testing process. However, once you’ve decided that service virtualization is the way forward, you’re not quite there yet. One of the decisions that still needs to be made is the selection of a tool or tool suite.

A note: following the ‘why?’-‘what?’-‘how?’ strategy of implementing new tools, tool selection (which is part of the ‘how?’) should only be done once it is clear why implementing service virtualization might be beneficial and what services are to be virtualized. A tool is a means of getting something done, nothing more.

As with many different types of tools, one of the choices to be made in the tool selection process is the question of purchasing a commercial tool or taking the open source route. In this post, I’d like to highlight some of the benefits and drawbacks of either option.

Commercial off the shelf (COTS) solutions
Many major tool vendors, including Parasoft, HP and CA, offer a service virtualization platform as part of their product portfolio. These tools are sophisticated solutions that are generally rich in features and possibilities. If you ask me, the most important benefits of going the commercial route are:

  • Support for a multitude of communication protocols and message standards. When you’re dealing with many different types of messages, or with proprietary or uncommon message standards or communication protocols, commercial solutions offer a much higher chance of supporting everything you need out of the box. This potentially saves a lot of time developing and maintaining a custom solution.
  • Support for different types of testing. Next to their ability to simulate request/response behaviour, most commercial tools offer options to simulate behaviour that is necessary for other types of tests. These options include random message dropping (to test the resilience and error handling of your application under test) and configurable performance characteristics (for when virtual assets are to be included in performance testing).
  • Seamless integration with test tools. Commercial vendors often offer API and other functional testing tools that can be combined with their corresponding service virtualization solution to create effective and flexible testing solutions.
  • Extensive product support. When you have questions with regards to using these tools, support is often just a phone call away. Conditions are often formalized in the form of support and maintenance contracts.
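To make the second bullet concrete, here is a minimal, hand-rolled sketch of the idea (plain Python standard library, not any of the commercial tools mentioned above): a virtual asset with configurable latency and message dropping. The endpoint, payload and knob values are invented for illustration.

```python
import json
import random
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Knobs a service virtualization tool would let you set per virtual asset.
# Arbitrary values: 50 ms of artificial latency, no dropped messages.
LATENCY_SECONDS = 0.05
DROP_RATE = 0.0  # fraction of requests answered with a 503


class FlakyDependencyStub(BaseHTTPRequestHandler):
    """Simulates a slow and (optionally) unreliable downstream service."""

    def do_GET(self):
        time.sleep(LATENCY_SECONDS)      # simulate performance characteristics
        if random.random() < DROP_RATE:  # simulate message dropping
            self.send_response(503)
            self.end_headers()
            return
        body = json.dumps({"status": "OK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), FlakyDependencyStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/inventory"
start = time.time()
response = urlopen(url)
reply = json.loads(response.read())
elapsed = time.time() - start  # includes the artificial latency

server.shutdown()
```

Raising DROP_RATE lets you verify that the application under test handles intermittent 503s gracefully, without having to break the real dependency.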

As with everything, there are some drawbacks too when selecting a COTS service virtualization solution:

  • Cost. These solutions tend to be very expensive, both in license fees and in the consultancy fees required for successful implementation. From what I’ve heard from friends who sell and implement these tools for a living, this is the single biggest obstacle keeping organizations from investing in service virtualization.
  • Possible vendor lock-in. A lot of organizations prefer not to work with big tool vendors because it’s hard to migrate away if, over time, you decide to discontinue their tools. I have no first-hand experience with this drawback, however.

Open source solutions
As an alternative to the aforementioned commercially licensed service virtualization tools, more and more open source solutions are entering the field. Some good examples of these would be WireMock and Hoverfly. These open source solutions have a number of advantages over paid-for alternatives:

  • Get started quickly. You can simply download and install them and get started right away. Documentation and examples are available online to get you up and running.
  • Possibly cheaper. There are no license fees involved, which for many organizations is a reason to choose open source instead of COTS solutions. Beware of the hidden costs of open source software, though! License fees aren’t the only factor to be considered.

Open source service virtualization solutions, too, have their drawbacks:

  • Limited set of features. The range of features they offer is generally more limited compared to their commercial counterparts. For example, both WireMock and Hoverfly can virtualize HTTP-based messaging systems, but do not offer support for other types of messaging, such as JMS or MQ.
  • Narrow scope. As a result of the above, these tools are geared towards specific situations where they might be very useful, but they might not suffice for an enterprise-wide implementation. For example, an HTTP-based service virtualization tool might have been implemented successfully for HTTP-based services, but when the team wants to extend service virtualization to JMS or MQ services, that’s often not possible without either developing an adapter or adding another tool to the tool set.
  • Best-effort support. When you have a question regarding an open source tool, support is usually provided on a best-effort basis, either by the creator(s) of the tool or by the user community. Especially for tools that are not yet commonplace, support may be hard to come by.

So, what to choose?
When given the choice, I’d follow these steps for making a decision between open source and commercial service virtualization and, more importantly, making it a success:

  1. Identify development and testing bottlenecks and decide whether service virtualization is a possible solution
  2. Research open source solutions to see whether there is a tool available that enables you to virtualize the dependencies that form the bottleneck
  3. Compare the tools found to commercial tools that are also able to do this. Make sure to base this comparison on the Total Cost of Ownership (TCO), not just on initial cost!
  4. Select the tool that is the most promising for what you’re trying to achieve, run a proof of concept yourself (for open source) or request one (for COTS), and evaluate whether the selected tool is fit for the job.
  5. Continue implementation and periodically evaluate your tool set to see whether the fit is still just as good (or better!).

If you have a story to tell concerning service virtualization tool selection and implementation, please do share it here in the comments!

Adding service virtualization to your Continuous Delivery pipeline

Lately I’ve been working on a couple of workshops and presentations on service virtualization and the trends in that field. One of these trends – integrating virtualized test environments, containerization and Continuous Delivery – is starting to see some real traction within the service virtualization realm. Therefore I think it might be interesting to take a closer look at it in this post.

What is service virtualization again?
Summarizing: service virtualization is a method to simulate the behaviour of dependencies that are required to perform tests on your application under test. You can use service virtualization to create intelligent and realistic simulations of dependencies that are not available, don’t contain the required test data or are otherwise access-restricted. I’ve written a couple of blog posts on service virtualization in the past, so if you want to read more, check those out here.
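As a minimal illustration of the concept (a hand-rolled sketch using only the Python standard library, not any particular service virtualization product), the snippet below stands in for an access-restricted backend by serving canned, realistic responses; the path and payload are invented:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses standing in for a dependency you cannot freely access.
CANNED_RESPONSES = {
    "/customers/42": {"id": 42, "name": "Test Customer", "status": "GOLD"},
}


class VirtualAsset(BaseHTTPRequestHandler):
    """Answers known requests with predefined, realistic test data."""

    def do_GET(self):
        payload = CANNED_RESPONSES.get(self.path)
        body = json.dumps(payload if payload else {"error": "no stub defined"}).encode()
        self.send_response(200 if payload else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), VirtualAsset)
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://127.0.0.1:{server.server_address[1]}"

# The application under test would point at base_url instead of the real backend.
reply = json.loads(urlopen(base_url + "/customers/42").read())
server.shutdown()
```

Real tools add much more on top of this (recording live traffic, request matching, data-driven responses), but the core idea is exactly this: a controllable stand-in with the test data you need.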

Service virtualization

Continuous Delivery and test environment challenges
The challenges presented by restricted access to dependencies during development and testing grow stronger when organizations move towards Continuous Delivery. A lack of suitable test environments for your dependencies can be a serious roadblock when you want to build, test and ultimately deploy software on demand and without human intervention. You can imagine that having virtual assets that emulate the behaviour of access-restricted components at your disposal – and more importantly: under your own control – is a significant improvement, especially when it comes to test execution and moving up the pipeline.

Using virtual test environments as artefacts in your pipeline
While having virtual test environments running on some server in your network is already a big step, ‘the next level’ when it comes to service virtualization as an enabler for Continuous Delivery is treating your virtual assets as artefacts that can be created, used and ultimately destroyed at will. I’ll talk about the specific benefits later, but first let’s see how we can achieve such a thing.

One way of treating your virtual test environments as CD artefacts is by containerizing them, for example using Docker. With Docker, you can create an image that contains your service virtualization engine, any virtual assets you would like to use and – if applicable – data sources containing the necessary test data to emulate the required behaviour.
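As a sketch of what such an image might look like – assuming WireMock as the engine; the jar location and the layout of the stub directories are hypothetical for this example – a Dockerfile could bundle the engine, the virtual assets and the canned response data into one disposable artefact:

```dockerfile
# Hypothetical image: virtualization engine plus virtual assets in one artefact.
FROM openjdk:8-jre
COPY wiremock-standalone.jar /opt/wiremock/
# WireMock reads stub definitions from <root-dir>/mappings and
# canned response bodies from <root-dir>/__files
COPY stubs/mappings /home/wiremock/mappings
COPY stubs/__files /home/wiremock/__files
EXPOSE 8080
CMD ["java", "-jar", "/opt/wiremock/wiremock-standalone.jar", \
     "--root-dir", "/home/wiremock", "--port", "8080"]
```

Once built, the pipeline can `docker run` a fresh instance of this image for each test stage and discard it afterwards.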

Another way of creating virtual test environments on demand is by hosting them using an on-demand cloud-based service such as Microsoft Azure. This allows you to spin up a simulated test environment when required, use and abuse it, and destroy it after you (or your deployment process) are done with it.

At the time of writing this blog post, I’m preparing a workshop where we demonstrate – and have the participants work with – both of these approaches. While the technology is still pretty new, I think this is an interesting and exciting way of taking (or regaining) full control of your test environments and ultimately testing faster, more often and better.

Adding containerized service virtualization to your continuous delivery pipeline

Benefits of containerized test environments
Using containerized test environments in your Continuous Delivery process has a couple of interesting advantages for your development and testing activities:

  • As mentioned, test environments can be spun up on demand, so no more waiting until that dependency you need is available and configured the way you need it.
  • You don’t need to worry about resetting the test environment to its original state after testing has completed. Just throw the test environment away and spin up a new instance ready for your next test run.
  • Having the same (virtual) test environment at your disposal every time makes it easier to reproduce and analyze bugs. No more ‘works on my machine’!
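The second and third points above can be sketched in plain Python (again a hand-rolled stand-in rather than an actual container platform or virtualization tool; paths and data are invented): each run gets a pristine simulated dependency in a known state, which is simply destroyed afterwards, so there is nothing to reset and results are reproducible.

```python
import contextlib
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


@contextlib.contextmanager
def virtual_test_environment(responses):
    """Spin up a fresh simulated dependency, destroy it when done."""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(responses.get(self.path, {})).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass

    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        yield f"http://127.0.0.1:{server.server_address[1]}"
    finally:
        server.shutdown()  # throw the environment away; nothing to reset


# Two independent "test runs": each gets a pristine environment in a known state.
stubs = {"/stock/1": {"available": 3}}
with virtual_test_environment(stubs) as base_url:
    first_run = json.loads(urlopen(base_url + "/stock/1").read())
with virtual_test_environment(stubs) as base_url:
    second_run = json.loads(urlopen(base_url + "/stock/1").read())
```

Swap the context manager for `docker run`/`docker rm` of a containerized virtual asset and you have the same pattern at pipeline scale.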

Further reading
Want to read more about containerized service virtualization? For an example specific to Parasoft Virtualize, see this white paper. And keep an eye out for other service virtualization tool vendors, such as HP and CA. They’re likely working on something similar as well, if they haven’t released it already.

Also, if you’re more of a hands-on person, you can apply containerization to open source service virtualization solutions such as Hoverfly yourself.