On writing and publishing my first ebook

Some of the most interesting things in life happen when you least expect them. Just over half a year ago (I looked it up: it was May 11th of this year, to be exact) I received an email from Brian at O’Reilly Media, asking if I was interested in writing a short book on service virtualization. I didn’t have to think long about an answer and replied ‘yes’ the same day. After almost six months, lots of writing, reviewing and editing, many, many emails and a couple of video calls, I am very proud to present to you my first ever ebook:

Service virtualization ebook

In this post, I’d like to tell you a little more about the book and about the process of writing and editing a piece like this. Even though the book is relatively short (HPE, the book’s sponsor, set an upper limit of 25 pages of actual content), we went through much the same process as a full-length book would require, from proposal to production and everything in between.

The book
So, first, let’s take a look at the most important part: the end result. What we were aiming for was to give an overview of the current state of the service virtualization field and how this technique plays a role (or at least can play a role) in current and future IT trends. I won’t summarize the whole book here (it’s short enough that you can read it in about an hour), but if you want to know how service virtualization and Continuous Delivery can work together, or how you can leverage service virtualization when testing Internet of Things applications, you’re cordially invited to read this book. It’s available free of charge from the HPE website, so why not take a look?

The writing process
After that initial email I received back in May, a lot has happened. Writing a piece like this starts with a proposal summarizing the prospective book outline, the reason the book should be written, the target audience, and why the person writing the proposal (i.e., me) thinks he or she is the right person for the job. This proposal is used to convince the sponsor (as I said, HPE, in this case) that they’re investing their money and effort wisely.

When the proposal is accepted, the actual writing starts. This is what takes up most of the time, but I think that goes without saying. We set two deadlines from the start: one by which a draft of around 50% of the book had to be delivered (to gauge whether the writer is on the right track and to keep things moving) and, of course, a deadline for the first full draft.

As anybody who has ever written a book knows, once the first full draft is delivered, you’re not there yet. Not even close! An extensive reviewing and editing process followed to remove any spelling and grammatical errors, to improve the flow of the book and to make sure that all content matched the expectations of HPE, of O’Reilly and, last but not least, of myself. This took a little longer than I initially thought it would, but then again, the end result is so much better than I could have produced on my own, so it has been well worth the effort.

Thoughts
Would I do it again? You bet I would! I have thoroughly enjoyed the process of proposing, writing, reviewing and editing this book, even though at times it has been hard to review the same piece of text for the umpteenth time. Also, the guys and girls from O’Reilly, who have worked just as hard as I have myself (if not harder) to get this book out there, have been nothing less than fantastic to work with. So, Brian, Virginia, thanks so much, it was awesome working with you and I look forward to doing this again in some way, shape or form in the future. I also learned quite a few interesting things about the English language and editing standards. Since I’m always looking to improve my English skills, this has been invaluable too.

So if you’re ever in the position where you’re asked to write a book, or if you’ve ever thought about writing one yourself, I can wholeheartedly recommend going for it. Not only will you have something that you can be proud of once you’re finished, but you’ll learn so many things in the process.

Oh, and again, if you’re interested in a quick read on the current state of service virtualization, you can download the book for free from here. I’d love to hear your thoughts on it.

Service virtualization: open source or commercial tooling?

I’ve been talking regularly about the benefits of introducing service virtualization to your software development and testing process. However, once you’ve decided that service virtualization is the way forward, you’re not quite there yet. One of the decisions that still needs to be made is the selection of a tool or tool suite.

A note: following the ‘why?’-‘what?’-‘how?’ strategy of implementing new tools, tool selection (which is part of the ‘how?’) should only be done when it is clear why implementing service virtualization might be beneficial and what services are to be virtualized. A tool is a means of getting something done, nothing more.

As with many different types of tools, one of the choices to be made in the tool selection process is the question of purchasing a commercial tool or taking the open source route. In this post, I’d like to highlight some of the benefits and drawbacks of either option.

Commercial off the shelf (COTS) solutions
Many major tool vendors, including Parasoft, HP and CA, offer a service virtualization platform as part of their product portfolio. These tools are sophisticated solutions that are generally rich in features and possibilities. If you ask me, the most important benefits of going the commercial route would be:

  • Their support for a multitude of communication protocols and message standards. When you’re dealing with a lot of different types of messages, or with proprietary or uncommon message standards or communication protocols, commercial solutions offer a much better chance of supporting everything you need to virtualize out of the box. This potentially saves a lot of time developing and maintaining a custom solution.
  • Support for different types of testing. Most commercial tools, in addition to their ability to simulate request/response behaviour, offer options to simulate behaviour that is necessary to perform other types of tests. These options include random message dropping (to test the resilience and error handling of your application under test) and setting performance characteristics (when virtual assets are to be included in performance testing).
  • Seamless integration with test tools. Commercial vendors often offer API and other functional testing tools that can be combined with their corresponding service virtualization solution to create effective and flexible testing solutions.
  • Extensive product support. When you have questions about using these tools, support is often just a phone call away. Conditions are often formalized in the form of support and maintenance contracts.

As with everything, there are some drawbacks too when selecting a COTS service virtualization solution:

  • Cost. These solutions tend to be very expensive, both in license fees and in the consultancy fees required for successful implementation. According to friends who sell and implement these tools for a living, this is the single biggest obstacle keeping organizations from investing in service virtualization.
  • Possible vendor lock-in. A lot of organizations prefer not to work with big tool vendors because it is hard to migrate away if, over time, you decide to discontinue their tools. I have no first-hand experience with this drawback, however.

Open source solutions
As an alternative to the aforementioned commercially licensed service virtualization tools, more and more open source solutions are entering the field. Some good examples of these would be WireMock and Hoverfly. These open source solutions have a number of advantages over paid-for alternatives:

  • Get started quickly. You can simply download and install them and get started right away. Documentation and examples are available online to get you up and running (see the short sketch after this list).
  • Possibly cheaper. There are no license fees involved, which for many organizations is a reason to choose open source instead of COTS solutions. Beware of the hidden costs of open source software, though! License fees aren’t the only factor to be considered.
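To give an idea of just how low the barrier to entry can be, here is a minimal sketch using WireMock’s Java API; the port number and the /customers/1 endpoint are purely illustrative, and the real dependency you would be simulating will of course look different:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class CustomerServiceStub {

    public static void main(String[] args) {
        // Start an embedded WireMock server on a fixed port (illustrative choice)
        WireMockServer server = new WireMockServer(options().port(8089));
        server.start();

        // Simulate the dependency: GET /customers/1 returns a canned JSON response
        server.stubFor(get(urlEqualTo("/customers/1"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 1, \"name\": \"Jane Doe\"}")));

        // Point your application under test at http://localhost:8089 instead of the real service
    }
}
```

Hoverfly offers a similarly quick start, for example by capturing traffic to a real dependency once and replaying it as a simulation afterwards.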

Open source service virtualization solutions, too, have their drawbacks:

  • Limited set of features. The range of features they offer is generally more limited than that of their commercial counterparts. For example, both WireMock and Hoverfly can virtualize HTTP-based messaging systems, but do not offer support for other types of messaging, such as JMS or MQ.
  • Narrow scope. As a result of the above, these tools are geared towards specific situations in which they can be very useful, but they might not suffice for an enterprise-wide implementation. For example, an HTTP-based service virtualization tool might have been implemented successfully for HTTP-based services, but when the team wants to extend its service virtualization implementation to JMS or MQ services, that’s often not possible without either developing an adapter or adding another tool to the tool set.
  • Best-effort support. When you have a question regarding an open source tool, support is usually provided on a best-effort basis, either by the creator(s) of the tool or by the user community. Especially for tools that are not yet commonplace, support may be hard to come by.

So, what to choose?
When given the choice, I’d follow these steps for making a decision between open source and commercial service virtualization and, more importantly, making it a success:

  1. Identify development and testing bottlenecks and decide whether service virtualization is a possible solution
  2. Research open source solutions to see whether there is a tool available that enables you to virtualize the dependencies that form the bottleneck
  3. Compare the tools found to commercial tools that are also able to do this. Make sure to base this comparison on the Total Cost of Ownership (TCO), not just on initial cost!
  4. Select the tool that is the most promising for what you’re trying to achieve, run (in the case of open source) or request (in the case of COTS) a proof of concept, and evaluate whether the selected tool is fit for the job.
  5. Continue implementation and periodically evaluate your tool set to see whether the fit is still just as good (or better!).

If you have a story to tell concerning service virtualization tool selection and implementation, please do share it here in the comments!

Adding service virtualization to your Continuous Delivery pipeline

Lately I’ve been working on a couple of workshops and presentations on service virtualization and the trends in that field. One of these trends – integrating virtualized test environments, containerization and Continuous Delivery – is starting to see some real traction within the service virtualization realm. Therefore I think it might be interesting to take a closer look at it in this post.

What is service virtualization again?
Summarizing: service virtualization is a method to simulate the behaviour of dependencies that are required to perform tests on your application under test. You can use service virtualization to create intelligent and realistic simulations of dependencies that are not available, don’t contain the required test data or are otherwise access-restricted. I’ve written a couple of blog posts on service virtualization in the past, so if you want to read more, check those out here.
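To make that a little more concrete, here is a hedged sketch of what such a simulation could look like. It uses WireMock’s Java API purely as an illustration (the endpoints and data are made up): one stub returns prepared test data, one simulates a slow dependency, and one simulates a dependency that drops the connection.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class DependencySimulation {

    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089); // illustrative port
        server.start();

        // Happy path: return the test data that the real dependency does not contain
        server.stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(okJson("{\"accountId\": 42, \"balance\": 100.0}")));

        // Slow dependency: respond after a two-second delay to exercise timeout handling
        server.stubFor(get(urlEqualTo("/accounts/slow"))
                .willReturn(okJson("{\"accountId\": 43}").withFixedDelay(2000)));

        // Broken dependency: reset the connection to exercise error handling
        server.stubFor(get(urlEqualTo("/accounts/broken"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}
```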

Service virtualization

Continuous Delivery and test environment challenges
The challenges presented by restricted access to dependencies during development and testing only grow when organizations move towards Continuous Delivery. A lack of suitable test environments for your dependencies can be a serious roadblock when you want to build, test and ultimately deploy software on demand and without human intervention. You can imagine that having virtual assets that emulate the behaviour of access-restricted components at your disposal – and, more importantly, under your own control – is a significant improvement, especially when it comes to test execution and moving up the pipeline.

Using virtual test environments as artefacts in your pipeline
While having virtual test environments running on some server in your network is already a big step, the ‘next level’ when it comes to service virtualization as an enabler for Continuous Delivery is treating your virtual assets as artefacts that can be created, used and ultimately destroyed at will. I’ll talk about the specific benefits later, but first let’s see how we can achieve such a thing.

One way of treating your virtual test environments as CD artefacts is by containerizing them, for example using Docker. With Docker you can, for example, create an image that contains your service virtualization engine, any virtual assets you would like to use and – if applicable – data sources that contain the necessary test data to emulate the required behaviour.
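As a rough sketch of what that could look like in practice – assuming the publicly available wiremock/wiremock Docker image and the Testcontainers library, with an illustrative image tag and endpoint – a test run could spin up its own containerized virtual asset and throw it away afterwards:

```java
import com.github.tomakehurst.wiremock.client.WireMock;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class ContainerizedVirtualAssetExample {

    public static void main(String[] args) {
        // Spin up a throwaway service virtualization container for this test run
        try (GenericContainer<?> virtualAsset =
                     new GenericContainer<>(DockerImageName.parse("wiremock/wiremock:latest"))
                             .withExposedPorts(8080)) {
            virtualAsset.start();

            // Point the WireMock client at the container's dynamically mapped port
            WireMock.configureFor(virtualAsset.getHost(), virtualAsset.getMappedPort(8080));

            // Load the virtual asset's behaviour; in a real pipeline this could be baked into the image
            stubFor(get(urlEqualTo("/orders/1")).willReturn(okJson("{\"orderId\": 1}")));

            // ... run your tests against the containerized test environment here ...
        }
        // The container, and any state it accumulated, is destroyed when the block ends
    }
}
```

Such an image can also be built in the pipeline itself, with the virtual assets and test data baked in, and promoted alongside the application under test.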

Another way of creating virtual test environments on demand is by hosting them on an on-demand, cloud-based service such as Microsoft Azure. This allows you to spin up a simulated test environment when required, use (and abuse) it, and destroy it once you – or your deployment process – are done with it.

At the time of writing this blog post, I’m preparing a workshop where we demonstrate – and have the participants work with – both of these approaches. While the technology is still pretty new, I think this is an interesting and exciting way of taking (or regaining) full control of your test environments and ultimately testing more, faster and better.

Adding containerized service virtualization to your continuous delivery pipeline

Benefits of containerized test environments
Using containerized test environments in your Continuous Delivery process has a couple of interesting advantages for your development and testing activities:

  • As said, test environments can be spun up on demand, so there’s no more waiting until that dependency you need is available and configured the way you need it to be.
  • You don’t need to worry about resetting the test environment to its original state after testing has completed. Just throw the test environment away and spin up a new instance ready for your next test run.
  • Having the same (virtual) test environment at your disposal every time makes it easier to reproduce and analyze bugs. No more ‘works on my machine’!

Further reading
Want to read more about containerized service virtualization? For an example specific to Parasoft Virtualize, see this white paper. And keep an eye out for other service virtualization tool vendors, such as HP and CA. They’re likely working on something similar as well, if they haven’t released it already.

Also, if you’re more of a hands-on person, you can apply containerization to open source service virtualization solutions such as Hoverfly yourself.