The tool is not important

One of the not-so-obvious reasons I spend a lot of (too much?) time on LinkedIn is that it regularly provides me with inspiration for thinly veiled rants (also known as blog posts). This one is no exception. Here’s the question that caused this specific blog post to happen:

If a functional tester needs to learn automation, which tool (or programming language) would you recommend and why?

I’ll just forget about this ‘functional tester’ and why he or she ‘needs’ to learn automation (there are a gazillion other ways to contribute to quality software). No, my main gripe with this question is one that I’ve been talking about a lot on this blog, on other media and in person: it zooms straight in on the ‘how?’, skipping over the far more important questions of ‘why?’ and ‘what?’ to automate.

Note that I’m not blaming the person who asked the question; he probably meant well and really wanted an answer. And yes, he got a lot of answers, most of them pointing him to Selenium and/or Java. No idea why, since there’s absolutely no context provided with regard to the application for which automated tests need to be written, the skill set of the people who are going to be made responsible for the automation, the overall software development process and, last but probably most important, whether or not there is a need for automation at all.

Granted, I’m probably taking the question out of context here, or at least I’m way overthinking it, but the relentless focus on tools is a real problem in the test automation space, and one that needs fixing. I see it in questions like these, but (and that’s a far bigger problem) in job openings and project offers too. People contact me and ask me to help them introduce and set up test automation, but they’ve already made the decision to go with tool X and Y. The reasoning? ‘Well, we’ve got a couple of spare licenses for those tools’, or ‘Benny from accounting saw a demo and thought it was cool’. The stupid is strong in that one.

So, let me say this once more:

The. Tool. Is. Not. Important.

What you want to do with it is, though. And that’s the question that’s still too often forgotten. Funny thing is, other crafts mastered this a long time ago. For instance, would you trust a handyman who does not ask what needs to be done and why you want something done in the first place, but instead says ‘See you tomorrow at eight. I’ll bring my hammer.’? It could be that he’s psychic and knows you need some woodwork done. In that case, let’s hope he brings a ruler, a saw and some nails too. More likely, you won’t hire him because you’ve got a hunch that a hammer might not be all too useful for that paint job you need done. I think. I’m not really into DIY. But I hope you get my point anyway.

So why is it that we’re still jumping straight to conclusions on which tool to use, bring, buy or build when it comes to test automation? Wouldn’t it be far better if we asked why we need automation in the first place, and what exactly it is that needs to be automated, and in what way? I think it would lead to far better and more effective automation, and save the craft and the software development world from a lot of useless automation effort.

Only after the ‘why?’ and the ‘what?’ have been covered and answered is it time to look at the ‘how?’. And that’s when the tool becomes important. Until then, it’s not.

Lessons learned while training

Recently, I’ve had the opportunity to deliver a couple of workshops to various groups of people. First, there was the REST Assured workshop I hosted at the Romanian Testing Conference, then after that two separate versions of my ‘test automation awareness’ training. I can now safely say that I enjoy teaching hugely, and that it is something I want to pursue further in the future.

Of course, even though I’ve been in the whole workshop and teaching business for a while now, there’s still a lot to learn. I’ve delivered (tool-specific) automation training for my previous employer, and since I’ve taken the freelance route, several opportunities have come my way as well, but I’m still not the teaching master I’d like to be. For me, one of the best ways to learn is to reflect on and write about my experiences, so that’s what I’m going to do here… Who knows, maybe these experiences are valuable to others as well. Let’s take a look at some of the lessons I’ve learned in my teaching efforts so far.

Tool-specific workshops are popular
Most of the requests I see or hear about are for workshops that cover one specific tool, be it on an introductory or an advanced level. The most in demand at the moment is Selenium, but I’ve been receiving multiple requests for REST Assured workshops as well. At the moment, I only offer these in house, in person, so I have to decline most of them, unfortunately. It’s highly unlikely that one individual requesting training is able to cover travel, lodging and my training fee. This means I’m mostly delivering training in the Netherlands (where you can get everywhere in an hour or two), or occasionally at a conference abroad (where occasionally means once, so far).

Of course, most training requests don’t even reach me, since I don’t offer training preparing for specific certifications (ISTQB, CAT, PSM, you name it), nor do I have the desire to start doing so. Test automation is my game, and I’d like to stick to that. And when people think of automation, they tend to think in terms of specific tools. How I feel about that? Well…

Tool-specific workshops can be good, but…
While I think it’s very useful to attend workshops and classes to learn either the basics or the more advanced features of specific tools (again, Selenium being the most popular), I feel there’s something missing. In my opinion, even the best tool workshops are useless in the long term if they’re attended by people who are unaware of the ‘why?’ behind using the tool (or automation in general) in the first place. Good tool-specific workshops teach you this. Not all of them do.

The risk you run as an attendee, or as an organization sending a group of your employees to a tool training that fails to provide the necessary background and context, is that you’re likely to end up with people who have been given a shiny new hammer and start to think that suddenly, everything is a nail. Not good.

Again, I’m not saying that all tool-specific workshops are like that, but at least some of them are. I know, because I’ve delivered them as well in the past. I’m still learning, too.

Tool-agnostic workshops work well. For the right audience.
As a counter-initiative to the aforementioned tool-specific workshops, I’ve started to develop, promote and deliver a ‘test automation awareness’ workshop, in which I teach some of the principles behind test automation and try to debunk common myths. By the end of the workshop, I hope to have achieved two things:

  • Teach people that test automation is a craft, requiring skills that need to be developed, and principles that need to be adhered to.
  • Give people a solid basis for asking the right questions once they enter a tool-specific training. With this awareness workshop, I’d like to answer the ‘why?’ and the ‘what?’ of automation, so that they can safely move on to the ‘how?’ in, for example, a Selenium workshop.

After having delivered my awareness workshop a couple of times now, in different formats and for different audiences, I’ve learned a couple of valuable things that will definitely help me improve it further:

  • The workshop works best for people who have had some prior exposure to automation. I’ve presented the subject, the principles and my trains of thought to several audiences, ranging from business analysts and project managers to experienced testers, and the people who aren’t working in and with automation on a regular basis tend to zone out after a while. For those people, a (half) hour presentation might work better. For testers (and probably for developers, too), it has worked out nicely so far.
  • You can offer people an exercise or two that teaches them something about automation without any programming involved. And I’m not talking about codeless automation tools. It has taken a bit more work and imagination, but I’ve come up with some exercises that get people thinking about proper automation implementation and discussing it among themselves without putting them in front of a keyboard. Again, there’s a lot to be covered related to the ‘why?’ and ‘what?’.
  • There is definitely a need for workshops like these. I’d gauged that from the popular Lego Automation workshop offered by the Ministry of Testing, but after a couple of runs of my own workshop and the feedback received both during and after, I can confirm that there IS a market for training that provides some realism with regards to automation.

As the year moves on, I’ll be working on improving my current training offerings and developing new ones. I’ve got some dates lined up already, but there is always room for more. Feel free to contact me with feedback, ideas for training or opportunities!

Is your user interface-driven test automation worth the effort?

I don’t know what’s happening on your side of things, dear reader, but in the projects I’m currently working on, and those I have worked on in the recent past, there’s been a big focus on implementing user interface-driven test automation, almost always using some implementation of Selenium WebDriver (be it Java, C# or JavaScript). While that isn’t a bad thing in itself (I think Selenium is a great and powerful tool), I sometimes wonder whether all the effort being put into creating, stabilizing and maintaining these scripts is worth it in the end.

Recently, I’ve been thinking and talking about this question especially often, whether when teaching different forms of my test automation awareness workshop, giving a talk on trusting your automation, or just evaluating and thinking about my own work and projects. Yes, I too am sometimes guilty of getting caught up in the buzz of creating those perfectly stable, repeatable and maintainable Selenium tests, spending hours or sometimes even days on getting them right, thereby losing sight of the far more important questions: ‘why am I creating this test in the first place?’ and ‘will this test pay me back for the effort I’m putting into creating it?’

Sure, there are plenty of valid applications for user interface-driven tests. Here’s a little checklist that might be of use to you. Feel free to critique or add to it in the comments or via email. In my opinion, it is likely you’re creating a valuable test if all of these conditions apply:

  • The test simulates how an end user or customer interacts with the application and receives feedback from it (example: user searches for an item in a web shop, adds it to the cart, goes through checkout and receives feedback on the successful purchase)
  • There’s significant risk associated with the end user not being able to complete the interaction (example: not being able to complete a purchase and checkout leads to loss of revenue)
  • There’s no viable alternative available through which to perform the interaction (example: the web shop might provide an API that’s being called by the UI throughout the process, but this does not allow you to check that the end user is able to perform said interaction via the user interface. Web shop customers typically do not use APIs for their shopping needs.)
  • The test is repeatable (without an engineer having to intervene with regards to test environments or test data)

Checking all of the above boxes is no guarantee that you’re creating a valuable user interface-driven test, but I tend to think it’s far more likely that you are if you do.

At the other end of the spectrum, a lot of useless (or ‘less valuable’ if you want the PC version) user interface-driven tests are created. And there’s more than one type of ‘useless’ here:

  • Tests that use the UI to test business logic that’s exposed through an API (use an API-level test instead!) or implemented in code (how about those unit tests?). Not testing at the right level leads to shallow feedback and increased execution time. Goodbye, fast feedback.
  • Tests that are unreliable with regards to execution and result consistency. ‘Flaky’ is the word I see used a lot, but I prefer ‘unreliable’ or ‘untrustworthy’. ‘Flaky’ sounds like snow to me. And I like snow. I don’t like unreliable tests, though. And user interface-driven tests are the tests that are most likely to be unreliable, in my experience.
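To make the first point in the list above concrete, here’s a minimal sketch in Java of what ‘testing at the right level’ can look like. The discount rule and its threshold are entirely made up for illustration; the point is that a business rule like this can be checked in milliseconds at the unit level, whereas driving the same rule through a browser-based checkout flow would cost seconds per case and add a whole layer of unreliability:

```java
public class DiscountRuleCheck {

    // Hypothetical business rule, amounts in cents:
    // orders of 100.00 or more get a 10% discount.
    static long discountFor(long orderCents) {
        return orderCents >= 10_000 ? orderCents / 10 : 0;
    }

    public static void main(String[] args) {
        check(discountFor(9_999), 0);       // just below the threshold: no discount
        check(discountFor(10_000), 1_000);  // exactly at the threshold: 10% off
        check(discountFor(25_000), 2_500);  // well above the threshold
        System.out.println("all discount checks passed");
    }

    static void check(long actual, long expected) {
        if (actual != expected) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

Three boundary values, no browser, no waits, no test data setup. The equivalent UI-driven checks would need a running web shop, a logged-in user and a populated cart before a single assertion could run.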

What it boils down to is that these user interface-driven tests are by far the hardest to implement correctly. There’s so much to be taken care of: waiting for page loads and element state, proper exception handling, test data and test environment management. Granted, those last two are not limited to just this type of test, but I find that people who know how to work at the unit or API level are also far more likely to be able to work with mocks, stubs and other simulations to deal with issues related to test data or test environments.
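The waiting problem in particular is where much of that effort goes. Below is a minimal, library-free Java sketch of the polling pattern that explicit waits (such as Selenium’s WebDriverWait) implement under the hood: evaluate a condition repeatedly until it holds or a timeout expires. The condition here is just a counter standing in for ‘element is visible’; in a real test it would query the browser:

```java
import java.util.function.Supplier;

public class WaitUntil {

    // Poll `condition` every `pollMillis` until it returns true or
    // `timeoutMillis` elapses. Returns true if the condition was met,
    // false if the deadline passed first.
    static boolean waitUntil(Supplier<Boolean> condition,
                             long timeoutMillis, long pollMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return condition.get(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for an asynchronous UI: the "element" appears on the third poll.
        int[] polls = {0};
        boolean appeared = waitUntil(() -> ++polls[0] >= 3, 1000, 10);
        System.out.println("condition met: " + appeared);

        // A condition that never holds hits the timeout instead of hanging forever.
        boolean neverAppeared = waitUntil(() -> false, 50, 10);
        System.out.println("condition met: " + neverAppeared);
    }
}
```

Fixed `Thread.sleep` calls sprinkled through a test are the unreliable version of this: they either wait too long (slow tests) or not long enough (flaky, sorry, untrustworthy tests). Bounded polling against an explicit condition is what makes a UI-driven test merely hard instead of hopeless.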

Here’s a tweet by Alan Page that recently appeared in my timeline and that sums it all up pretty well:

So, having read this post, are you still sure that all those hours you’re putting into creating, stabilizing and maintaining your Selenium tests are worth it in the end? If so, I tip my hat to you. But for the majority of people working on user interface-driven tests (again, myself included), it wouldn’t hurt to take a step back every now and then, lose the ‘have to get it working’ tunnel vision and think for a while about whether your tests are actually delivering enough value to justify the effort put into creating them.

So, are your UI automation efforts worth it?