Why and how I still use the test automation pyramid

Last week, while delivering part one of a two-evening API testing course, I found myself explaining the benefits of writing automated tests at the API level using the test automation pyramid. That in itself probably isn’t too noteworthy, but what immediately struck me as odd is that I found myself apologizing to the participants for using a model that has received as much criticism as the pyramid.

Odd, because

  1. Half of the participants hadn’t even heard of the test automation pyramid before.
  2. The pyramid, as a model, is still a very useful way for me to explain a number of concepts and good practices related to test automation.

#1 is a problem that should be tackled by better education around software testing and test automation, I think, but that’s not what I wanted to talk about in this blog post. No, what I would like to show is that, at least to me, the test automation pyramid is still a valuable model when explaining and teaching test automation, as long as it’s used in the right context.

The version of the test automation pyramid I tend to use in my talks

The basis of what makes the pyramid a useful concept to me is the following distinction:

It is a model, not a guideline.

A guideline is something that’s (claiming to be) correct under certain circumstances. A model, as the statistician George Box said, is always wrong, but some models are useful. To me, this applies perfectly to the test automation pyramid:

There’s more to automation than meets the UI
The test automation pyramid, as a model, helps me explain to less experienced engineers that there’s more to test automation than end-to-end tests (those often driven through the user interface). I often explain this using examples from real-life projects, where we chose to write a couple of end-to-end tests to verify that customers could complete a specific sequence of actions, combined with a more extensive set of API tests to verify business logic at a lower level, and why this was a much more effective approach than testing everything through the UI.
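To make this concrete, here’s a minimal sketch of what such an API-level check might look like, using REST Assured; the endpoint, parameter and response fields are made up for illustration. A set of tests like this exercises business logic much faster and more reliably than driving the same checks through the browser.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

public class LoanQuoteApiTest {

    @Test
    public void validLoanAmount_shouldBeApproved() {
        given()
            .baseUri("http://localhost:8080") // hypothetical API under test
            .queryParam("amount", 10000)
        .when()
            .get("/loan/quote")
        .then()
            .statusCode(200)
            .body("status", equalTo("APPROVED"));
    }
}
```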

Unit testing is the foundation
The pyramid, as a model, perfectly supports my belief that a solid unit testing strategy is the foundation of any successful test automation effort of significant size. Anything that can be covered in unit tests should not have to be covered again in higher-level tests, i.e., at the integration/API or even at the end-to-end level.
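As an illustration of what that foundation looks like in practice, here’s a plain JUnit test pinning down a business rule at the unit level; the calculator class is a hypothetical stand-in for real production code. Once a rule is nailed down here, there’s no reason to verify it again through the API or the UI.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class LoanQuoteCalculatorTest {

    // Hypothetical stand-in for the business logic under test
    static class LoanQuoteCalculator {
        private static final int MAX_AMOUNT = 50000;

        String quoteFor(int amount) {
            return (amount > 0 && amount <= MAX_AMOUNT) ? "APPROVED" : "REJECTED";
        }
    }

    @Test
    public void amountWithinLimit_isApproved() {
        assertEquals("APPROVED", new LoanQuoteCalculator().quoteFor(10000));
    }

    @Test
    public void amountExceedingLimit_isRejected() {
        assertEquals("REJECTED", new LoanQuoteCalculator().quoteFor(100000));
    }
}
```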

E2E and UI tests are two different concepts
The pyramid, as a model, helps me explain the difference between end-to-end tests, where the application as a whole is exercised from top (often the UI) to bottom (often a database), and user interface tests. The latter may be end-to-end tests, but what surprisingly many people don’t realize is that you can write unit tests for your user interface just as well. There’s a reason the top layer of the pyramid that I use (together with many others) says ‘E2E’, not ‘UI’…
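Here’s a minimal sketch of what that can look like: the presentation logic lives in a (hypothetical) view model class that can be verified with a plain unit test, no browser or rendered screen involved.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class AccountBalanceViewModelTest {

    // Hypothetical presentation logic: turns a raw balance (in cents)
    // into the string shown to the user
    static class AccountBalanceViewModel {
        String displayBalance(long cents) {
            long abs = Math.abs(cents);
            String formatted = String.format("EUR %d.%02d", abs / 100, abs % 100);
            return cents < 0 ? formatted + " (overdrawn)" : formatted;
        }
    }

    @Test
    public void negativeBalance_isFlaggedAsOverdrawn() {
        assertEquals("EUR 12.50 (overdrawn)",
            new AccountBalanceViewModel().displayBalance(-1250));
    }
}
```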

Don’t try to enforce ratios between test automation scope levels
The pyramid, when used as a guideline, can lead to less than optimal test automation decisions. This mainly applies to the ratio between the number of tests in each of the E2E, integration and unit categories. Even though well-thought-out automation suites will naturally gravitate towards a ratio of more unit tests than integration tests and more integration tests than E2E tests, this should never be forced. I’ve even seen some people, who unfortunately were the ones in charge, make decisions on what and how to automate based on ratios. Some even went as far as saying ‘X % of our automated tests HAVE TO be unit tests’. Personally, I’d rather go for the ratio that delivers in terms of effectiveness and the time needed to write and maintain the tests.

Test automation is only part of the testing story
‘My’ version of the test automation pyramid (or at least the version I use in my presentations) prominently features what I call exploratory testing. This reminds me to tell those who are listening that there’s more to testing than automation. I call this part of the testing story ‘exploratory testing’ because it’s the part where humans explore and evaluate the application under test to inform themselves and others about aspects of its quality. It’s what’s often referred to as ‘manual testing’, but I don’t like that term.

As you can see, to me, the test automation pyramid is still a very valuable model (and still a useless guideline) when it comes to explaining my thoughts on automation, despite all the criticism it has received over the years. I hope I never find myself apologizing for using it again.

On spreading myself (too?) thin

As some of you might have seen, I’ve been doing quite a bit of writing in the last couple of months. In addition to a weekly blog post for this website, I’ve been busy writing several articles for TechBeacon and StickyMinds, as well as a couple of one-offs for other sites. Beyond that, I’ve also reviewed a couple of chapters for a book, I’m preparing another webinar, and I’ll be delivering several training courses and talks in the next couple of months, so I’m spending time preparing those too. And there’s this thing called ‘client work’ that takes up a lot of time as well.

Needless to say, it’s a challenge sometimes to get everything done and still stick to my own quality standards. So much so that I’ve finally realized it might be a good idea to scale back a little. That’s hard for me, because I love doing all the things I’m allowed to do, and I like to do as many of them as possible. But I feel that, slowly, I’m starting to compromise on quality, and that’s the exact opposite of what I stand for.

Maybe even more importantly, all that writing and other work comes with quite tight deadlines, which on the one hand is a blessing for a world-class procrastinator like myself, but on the other hand is also the main cause of me putting the urgent (the work with the short deadlines) before the important (the work that I really feel needs to be done). Because there’s always another blog post or article to write, or a call to do, there’s no time for the deep work required to create some of the things I want to create.

Some of you might know what I’m referring to, because I’ve been in touch with a number of people to discuss the idea: I’d love to do something about the way test automation is taught in courses. See this blog post for a more elaborate explanation of what I mean by that statement. Creating a course that covers everything I think should be covered will not be easy, though, nor will it be fast. But since it’s something I feel so strongly about, it’s worth carving out the required time from my schedule. Even if that means disappointing some people, or saying ‘no’ to requests or invitations.

And there’s another, albeit smaller, course related to automation (and to Selenium in particular) that I’d like to see published as well. So there’s time needed for that too. That means I’ll have to be a little more careful with my time and planning, something I never really had to do before. So, in a way, this will be a good lesson for me as well.

I’ll still try to write a blog post every week, but if needs must, I might have to break that promise to myself as well. I’ll keep you posted.

My thoughts on who should create automation, and why there might be a more urgent problem at hand

Recently, there has been quite a bit of discussion on Twitter about which role should be responsible for writing test automation code. From what I’ve read, there are roughly two camps:

  • The camp advocating that developers should be made responsible for writing most, if not all, automation code. Their main reason: developers know best how to code, and they also know best how the code is structured and works internally, which gives them the best insight into which hooks in the code to leverage for automation (as opposed to blindly doing everything through the user interface).
  • The camp advocating that test automation requires dedicated automation engineers, especially for anything above the unit testing level. Their main reason: it takes specific skills to write and maintain good automation, skills that extend further than ‘just’ development skills.

At the end of last year, I published a blog post on roughly this topic. Rereading it, I still agree with the opinion I described back then (which is a good thing!), but having thought, read and talked about this topic some more in recent months, there are some subtle nuances (and one maybe not-so-subtle one) I’d like to add. Which, in turn, makes for a good topic for another blog post.

First of all, looking back at the question of ‘Who should be responsible for creating automation?’, I think the correct answer is not ‘developers’ or ‘test automation engineers’. The correct answer to this question is ‘the development team’. This includes developers, automation engineers, testers, designers, the works. I think that deep down, everybody agrees on this (save maybe the grumpy, old-fashioned developer who thinks writing automation code is way beneath his standards). A slightly (yet also very) different question is ‘Who should be primarily tasked with creating automation?’. That’s where the two camps I mentioned above diverge.

One of the catalysts of the recent discussion on this topic is this blog post from Alan Page. His blog post was based on a series of tweets Alan sent, which were in turn extensively annotated by Richard Bradshaw in another blog post. Whether or not you agree with their respective standpoints, I think both blog posts are recommended reading material for anybody working in the test automation space. Most of you will probably have read them already, but if you haven’t, make sure you do.

My opinion? In an ideal world, where developers have the time, the knowledge and the drive required to create useful automation, the dedicated automation engineer might be going the way of the dinosaur. Personally, I don’t see that happening in the foreseeable future, though. And this opinion is backed up by what I’ve seen at most of the organizations I’ve worked with over the past year and a half (more than ten, in case you’re wondering). In most teams, developers lack either the time (mostly a bad excuse), the drive (also a bad excuse) or the knowledge (excusable, but something that should be fixed anyway) to concern themselves with creating automation. The same goes for their testers.

As a result, they rely on their own automation engineers to help them create, run and maintain their automation, or they hire someone from outside the organization. This is where I often come in, either to create the automation myself and teach employees how to extend, run and maintain it, or in a mentoring or coaching role, where I observe and help teams define their own automation strategy. And the number of projects I see advertised (either directly to me or on freelance job boards and mailing lists) does not indicate a decline in the need for dedicated automation engineers either. Quite the contrary.

In the end, though, it does not (yet) matter to me WHO is tasked with creating and maintaining automation. To put it even more strongly, I don’t think it’s the most important problem (or discussion) that needs to be tackled with regard to automation at the moment. Instead, I’d love to see more discussion, teaching and mentoring on what constitutes good automation, and how to implement automation in a way that makes it maintainable, reliable and valuable in the long run.

I don’t know if it’s just the time of year, or the fact that I keep getting handed exactly the wrong (or right, depending on how you look at it) code bases, but in the last couple of weeks I’ve witnessed some truly horrifying pieces of automation cr*p. Selenium scripts where every third line was a Thread.sleep(). Cucumber step definition methods containing Selenium object locators. Horrible variable names (what in the name of Peter is ‘y’ supposed to tell me?). A new Selenium test case for every possible combination of input and output parameters, even though the sequence of sendKeys() and click() actions stays exactly the same. And much more of such goodness.
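To show how little it would take to do better, here’s a sketch that remedies two of those issues at once; the page URL, locators and test data are made up. It uses an explicit wait on a condition instead of Thread.sleep(), and a single parameterized test instead of a copy-pasted test case per input combination.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.Duration;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CurrencyConverterTest {

    // One data-driven test instead of a duplicated test case per combination:
    // the sendKeys()/click() sequence is identical, only the data varies
    @ParameterizedTest
    @CsvSource({
        "100, USD, 92.50",
        "250, GBP, 291.00"
    })
    public void conversionResultIsDisplayed(String amount, String currency, String expected) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/convert"); // illustrative URL
            driver.findElement(By.id("amount")).sendKeys(amount);
            driver.findElement(By.id("currency")).sendKeys(currency);
            driver.findElement(By.id("convert")).click();

            // Explicit wait on a condition instead of Thread.sleep():
            // proceeds as soon as the result appears, fails fast when it doesn't
            String result = new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("result")))
                .getText();

            assertEquals(expected, result);
        } finally {
            driver.quit();
        }
    }
}
```

Nothing about this is advanced engineering, which makes what I keep running into all the more painful.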

Arguably the biggest gripe I have with this: these abominations were created by external consultants. Who had been working on them for months. And probably got paid a handsome hourly fee by their (and my) client for their efforts. That makes the problem at hand twofold:

  • Bad thing: there are too many self-proclaimed ‘automation consultants’, ‘architects’, even the ‘senior’, ‘principal’, or Peter-knows-what-else versions of them, who couldn’t write a decent test if their life depended on it.
  • Even worse thing: their clients don’t have the time or knowledge (probably the latter) to take a look and recognize what absolute garbage those expensive ‘consultants’ deliver.

Now that software development and delivery cycles are speeding up ever more, and teams are increasingly relying on automation to safeguard the quality of the releases they’re putting out into the world, it’s high time to do something about this. Educate both the people responsible for creating automation and the teams that rely on their efforts about what constitutes good automation, and give them the tools to monitor and act upon automation quality. If we as test automation crafts(wo)men don’t start doing this sooner rather than later, I’m afraid that crappy automation will become the new bottleneck in modern software development, just like testing was at the end of waterfall projects in times past.

Once we’ve tackled that problem, let’s move on to who’s the best fit to write what we agree upon is good automation.