An experiment in creating better tool-centered automation training

Last week, I delivered the second part of a two-evening API testing training at my former employer. They contacted me a while ago to see if I could help them offer test automation training to their employees, as well as to their clients and other contacts. When I was still working with them, I used to deliver automation training as well, and it felt really great to be asked back, even though I left almost three years ago now.

But that’s not what this post is about.

This API testing training was in some ways an experiment I have wanted to conduct for a while now. I see a lot of individuals and organizations offering automation training, and most of it is specific to a single tool. That isn’t inherently a bad thing, but I have one big problem with a lot of these tool-centered courses: instead of teaching you how to create a sound automation approach around one or more tools, they simply go through the most important features of a single tool and teach the participants some (sometimes useful) tricks. I know. I’ve delivered those courses in the past as well. If you’re a tool vendor or creator, I can understand why you would want to do that. But I think there’s more to good automation education than teaching you all the ins and outs of a specific tool. Let’s call them the 3 C’s:

  • Context – A tool is often only useful in a specific context. This context includes the skills of the people who will use the tool, the development and delivery process that the tool is to become part of, and much more. Without context, it’s very hard to decide whether a specific tool is the right one for the job.
  • Competition – For nearly all tools out there, there’s at least one competitor on the market (and often many more) that can be used to complete the same task. Good training should therefore introduce more than one option, give the participants some hands-on experience with each of them, and let them decide what would work best for their tasks and their team.
  • Cutting the crap – Tool-specific training might give the impression that the tool the participants are being trained in is the best thing since sliced bread. Which in turn leads people to try and automate anything and everything with a single tool. Which in turn all too often leads to crap. In other words: what’s the point in knowing all the different types of waits available in Selenium if you don’t know how to decide what makes a good scenario to automate with Selenium in the first place?

So, instead of delivering my API testing training around REST Assured alone (which I’ve done a number of times in the past), I decided to introduce three different tools to the participants: REST Assured, the open source version of SmartBear SoapUI, and Parasoft SOAtest. After an introduction to what constitutes API testing, why it is useful and what you can test using APIs, I had the participants create a number of basic API tests with each of these tools (pretty much the same tests, three times over), so they could experience firsthand how the features provided by each of the tools compare. Moreover, since I chose tools that sit at opposite ends of the API test tool spectrum (REST Assured is a Java library for RESTful APIs, SOAtest is a commercially licensed, enterprise-grade tool that supports a wide variety of protocols and message types, with SoapUI somewhere in between), the participants got a much broader view of API testing than they would have gotten by learning REST Assured alone.
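To give an impression of the kind of basic test the participants recreated in each tool, here’s a minimal REST Assured sketch. The endpoint, port and field names are made up for the example; the point is the given/when/then structure of an API check, not the specific API:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class CustomerApiTest {

    // Hypothetical endpoint and expected values, for illustration only.
    @Test
    public void getCustomer_returnsExpectedFirstName() {
        given().
        when().
            get("http://localhost:9876/customers/12212").
        then().
            assertThat().
            statusCode(200).
        and().
            body("firstName", equalTo("John"));
    }
}
```

The same check in SoapUI or SOAtest takes the shape of configured request and assertion steps rather than code, which is exactly the contrast the exercise was meant to bring out.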

The feedback I received afterwards confirmed what I hoped to achieve with my experiment: all the attendees thought it was great to see more than a single tool, and since I gave them pointers to material for further exploration, they could decide for themselves in which direction to take their further education.

I recently launched another course in which I try to do something similar, although in a different fashion: instead of teaching people how to use Selenium WebDriver (i.e., teaching them the API and some useful Selenium-only tricks), I explain what types of tests should be created with Selenium, and I teach them how I would approach creating readable, maintainable and reliable tests with Selenium, Cucumber/SpecFlow, JUnit/NUnit and ExtentReports. Again: providing context and cutting the crap (you can argue about whether or not I’m covering ‘competition’ in this one) instead of teaching people all the methods and features of a single tool.
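To illustrate what I mean by readable and maintainable (as opposed to Selenium-only tricks), here’s a rough sketch of the style I have in mind: the mechanics of locators and synchronization live in a page object, so the test itself reads as user intent. The class name, locators and timeout are hypothetical, and the wait constructor shown is the Selenium 3 one:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Hypothetical page object: it hides locators and waits,
// so tests built on top of it read as plain user actions.
public class LoginPage {

    private final WebDriver driver;
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String username, String password) {
        // Explicit wait instead of scattered Thread.sleep() calls
        new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(usernameField));
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}
```

A test then boils down to new LoginPage(driver).loginAs("jane", "secret") followed by an assertion, and locators or waits can change without touching a single test.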

I hope to deliver this type of automation training much more often in the future, and I’d love to see other automation training providers follow suit. For those of you who’d like some more details on the training objectives and subjects covered in this API testing training, please click here.

As always, I’d love to hear your thoughts. What, to you, constitutes good automation training?

Why and how I still use the test automation pyramid

Last week, while delivering part one of a two-evening API testing course, I found myself explaining the benefits of writing automated tests at the API level using the test automation pyramid. That in itself probably isn’t too noteworthy, but what immediately struck me as odd is that I found myself apologizing to the participants for using a model that has received as much criticism as the pyramid has.

Odd, because

  1. Half of the participants hadn’t even heard of the test automation pyramid before.
  2. The pyramid, as a model, is still a very useful way for me to explain a number of concepts and good practices related to test automation.

#1 is a problem that should be tackled by better education around software testing and test automation, I think, but that’s not what I wanted to talk about in this blog post. No, what I would like to show is that, at least to me, the test automation pyramid is still a valuable model when explaining and teaching test automation, as long as it’s used in the right context.

The version of the test automation pyramid I tend to use in my talks

The basis of what makes the pyramid a useful concept to me is the following distinction:

It is a model, not a guideline.

A guideline is something that claims to be correct, at least under certain circumstances. A model, as the statistician George Box famously put it, is always wrong, but some models are useful. To me, this applies perfectly to the test automation pyramid:

There’s more to automation than meets the UI
The test automation pyramid, as a model, helps me explain to less experienced engineers that there’s more to test automation than end-to-end tests (those often driven through the user interface). I often explain this using examples from real-life projects, where we chose to write a couple of end-to-end tests to verify that customers could complete a specific sequence of actions, combined with a more extensive set of API tests to verify business logic at a lower level, and why this was a much more effective approach than testing everything through the UI.

Unit testing is the foundation
The pyramid, as a model, perfectly supports my belief that a solid unit testing strategy is the basis for any successful, significantly-sized test automation effort. Anything that can be covered in unit tests should not have to be covered again in higher-level tests, i.e., at the integration/API or even the end-to-end level.
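As a hedged sketch of what that means in practice: a hypothetical piece of business logic, say a DiscountCalculator, can have its entire rule matrix verified in milliseconds at the unit level, so none of those rules need to be re-verified through slower API or end-to-end tests:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical business logic, included only to keep the example self-contained.
class DiscountCalculator {
    int discountPercentageFor(String customerType, double orderTotal) {
        if ("GOLD".equals(customerType)) {
            return orderTotal >= 1000.0 ? 15 : 10;
        }
        return 0;
    }
}

public class DiscountCalculatorTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    // Each business rule gets its own fast, focused unit test.
    @Test
    public void regularCustomer_getsNoDiscount() {
        assertEquals(0, calculator.discountPercentageFor("REGULAR", 100.0));
    }

    @Test
    public void goldCustomer_getsTenPercent() {
        assertEquals(10, calculator.discountPercentageFor("GOLD", 100.0));
    }

    @Test
    public void goldCustomer_withLargeOrder_getsFifteenPercent() {
        assertEquals(15, calculator.discountPercentageFor("GOLD", 1000.0));
    }
}
```

A single higher-level test can then confirm that a discount actually shows up where the user sees it; the rule matrix itself stays at the unit level.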

E2E and UI tests are two different concepts
The pyramid, as a model, helps me explain the difference between end-to-end tests, where the application as a whole is exercised from top (often the UI) to bottom (often a database), and user interface tests. The latter may be end-to-end tests, but unbeknownst to surprisingly many people, you can write unit tests for your user interface just as well. There’s a reason the top layer of the pyramid that I use (together with many others) says ‘E2E’, not ‘UI’…

Don’t try to enforce ratios between test automation scope levels
The pyramid, when used as a guideline, can lead to less than optimal test automation decisions. This mainly applies to the ratio between the number of tests in each of the E2E, integration and unit categories. Even though well thought through automation suites will naturally steer towards having more unit tests than integration tests, and more integration tests than E2E tests, they should never be forced into those ratios. I’ve even seen some people, who unfortunately were the ones in charge, make decisions on what and how to automate based on ratios alone. Some even went as far as saying ‘X% of our automated tests HAVE TO be unit tests’. Personally, I’d rather go for the ratio that delivers in terms of effectiveness and the time needed to write and maintain the tests.

Test automation is only part of the testing story
‘My’ version of the test automation pyramid (or at least the version I use in my presentations) prominently features what I call exploratory testing. This reminds me to tell those who are listening that there’s more to testing than automation. I call this part of the testing story ‘exploratory testing’, because this is the part where humans explore and evaluate the application under test to inform themselves and others about aspects of its quality. This is what’s often referred to as ‘manual testing’, but I don’t like that term.

As you can see, to me the test automation pyramid is still a very valuable model (and still a useless guideline) when it comes to explaining my thoughts on automation, despite all the criticism it has received over the years. I hope I never find myself apologizing for using it again.

On spreading myself (too?) thin

As some of you might have seen, I’ve been doing quite a bit of writing over the last couple of months. Besides a weekly blog post for this website, I’ve been busy writing several articles for TechBeacon and StickyMinds, as well as a couple of one-offs for other sites. On top of that, I’ve reviewed a couple of chapters for a book, I’m preparing another webinar, and I’ll be delivering several training courses and talks in the next couple of months, so I’m spending time preparing those too. And there’s this thing called ‘client work’ that takes up a lot of time as well.

Needless to say, it’s sometimes a challenge to get everything done and still stick to my own quality standards. So much so that I’ve finally realized it might be a good idea to scale back a little. That’s hard for me, because I love doing all the things I get to do, and I like to do as much of them as possible. But I feel that, slowly, I’m starting to compromise on quality, and that’s the exact opposite of what I stand for.

Maybe even more importantly, all that writing and other work comes with quite tight deadlines, which on the one hand is a blessing for a world-class procrastinator like myself, but on the other hand is also the main cause of me putting the urgent (the work with the short deadlines) before the important (the work that I really feel needs to be done). Because there’s always another blog post or article to write, or a call to do, there’s no time for the deep work required to create some of the things I want to create.

Some of you might know what I’m referring to, because I’ve been in touch with a number of people to discuss the idea: I’d love to do something about the way test automation is taught in courses. See this blog post for a more elaborate explanation of what I mean by that statement. Creating a course that covers everything I think should be covered will not be easy, nor will it be fast. But since it’s something I feel so strongly about, it’s worth carving out the required time from my schedule. Even if that means disappointing some people, or saying ‘no’ to requests or invitations.

And there’s another, albeit smaller, course related to automation (and to Selenium in particular) that I’d like to see published as well. So there’s time needed for that too. That means I’ll have to be a little more careful with my time and planning, something I never really had to do before. So, in a way, this will be a good lesson for me as well.

I’ll still try to write a blog post every week, but if needs must, I might have to break that promise to myself as well. I’ll keep you posted.