My thoughts on who should create automation, and why there might be a more urgent problem at hand

Recently, there has been quite a bit of discussion on Twitter about which role should be responsible for writing test automation code. From what I’ve read, there are roughly two camps:

  • The camp advocating that developers should be made responsible for writing most, if not all, automation code. Their main reason: developers know best how to code, and they know best how the code is structured and works internally, which gives them the best insight into which hooks in the code to leverage for automation (as opposed to blindly doing everything through the user interface).
  • The camp advocating that test automation requires dedicated automation engineers, especially for anything above the unit testing level. Their main reason: it takes specific skills to write and maintain good automation, skills that go beyond ‘just’ development skills.

At the end of last year, I published a blog post on roughly this topic. Rereading it, I still agree with the opinion I described back then (which is a good thing!), but having thought, read and talked about this topic some more in recent months, there are some subtle nuances (and one maybe not-so-subtle one) I’d like to add. Which, in turn, makes for a good topic for another blog post.

First of all, looking back at the question of ‘Who should be responsible for creating automation?’, I think the correct answer is not ‘developers’ or ‘test automation engineers’. The correct answer to this question should be ‘the development team’. This includes developers, automation engineers, testers, designers, the works. I think that deep down, everybody agrees on this (save maybe the odd grumpy, old-fashioned developer who thinks writing automation code is way beneath his standards). A slightly (yet also very) different question is ‘Who should be primarily tasked with creating automation?’. That’s where the two camps I mentioned above diverge.

One of the catalysts of the recent discussion on this topic is this blog post from Alan Page. His blog post was based on a series of Tweets Alan sent, which were in turn extensively annotated by Richard Bradshaw in another blog post. Whether or not you agree with their respective standpoints, I think both blog posts are recommended reading material for anybody working in the test automation space. Most of you will probably have read them already, but if you haven’t, make sure you do.

My opinion? In an ideal world, where developers have the time, the knowledge and the drive required to create useful automation, the dedicated automation engineer might be going the way of the dinosaur. Personally, I don’t see that happening in the foreseeable future, though. And this opinion is backed up by what I see with most of the organizations I’ve worked with over the past year and a half (10+ in number, in case you’re wondering). In most teams, developers either lack the time (mostly a bad excuse), the drive (also a bad excuse) or the knowledge (this is excusable, but should be fixed anyway) to concern themselves with creating automation. The same goes for their testers.

As a result, they rely upon their own automation engineers to help them create, run and maintain their automation, or they hire someone from outside the organization. This is where I often come in, either to create the automation myself and teach employees how to extend, run and maintain it, or in a mentoring or coaching role, where I observe and help teams define their own automation strategy. And the number of projects that I see advertised (either directly to me or on freelance job boards and email lists) does not indicate a decline in the need for dedicated automation engineers either. Quite the contrary.

In the end, though, it does not (yet) matter to me WHO is tasked with creating and maintaining automation. To put it even more strongly, I don’t think it’s the most important problem (or discussion) that needs to be tackled with regards to automation at the moment. Instead, I’d love to see more discussion, teaching and mentoring on what constitutes good automation, and on how to implement automation in a way that keeps it maintainable, reliable and valuable in the long run.

I don’t know if it’s just the time of the year, or the fact that I keep getting passed exactly the wrong (or right, depending on how you look at it) code bases, but in the last couple of weeks I’ve witnessed some truly horrifying pieces of automation cr*p. Selenium scripts where every third line was a Thread.sleep(). Cucumber step definition methods containing Selenium object locators. Horrible variable names (what in the name of Peter is ‘y’ supposed to tell me?). Writing a new Selenium test case for every possible combination of input and output parameters, even though the sequence of sendKeys() and click() actions stays exactly the same. And much more of such goodness.
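To make those last two complaints a little more concrete: below is a minimal sketch (assuming Selenium 3 and JUnit 5, with made-up URLs, locators and test data) of how an explicit wait can replace the Thread.sleep() calls, and how a single parameterized test can cover multiple input/output combinations instead of a copy-pasted test case per combination. It illustrates the idea only; it is not a drop-in fix for any of the code bases I mentioned.

// Minimal sketch: Selenium 3 + JUnit 5. All locators, URLs and test data are made up.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class LoanApplicationTest {

    // One parameterized test covers all input/output combinations,
    // instead of a separate, copy-pasted test case per combination.
    @ParameterizedTest
    @CsvSource({
        "1000, 12, approved",
        "100000, 6, rejected"
    })
    void submittingLoanRequestShowsExpectedResult(String amount, String months, String expectedResult) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://example.com/loans");

            driver.findElement(By.id("amount")).sendKeys(amount);
            driver.findElement(By.id("duration")).sendKeys(months);
            driver.findElement(By.id("submit")).click();

            // Explicit wait for the result to appear, instead of a Thread.sleep()
            String actualResult = new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("result")))
                .getText();

            assertEquals(expectedResult, actualResult);
        } finally {
            driver.quit();
        }
    }
}

Variable names that say what they contain, waits that are tied to an actual condition, and test data separated from the flow: none of it is rocket science, but it makes all the difference for maintainability.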

Arguably the biggest gripe I have with this: these abominations were created by external consultants. Who had been working on them for months. And probably got paid a handsome hourly fee by their (and my) client for their efforts. That makes the problem at hand twofold:

  • Bad thing: there are too many self-proclaimed ‘automation consultants’, ‘architects’, even the ‘senior’, ‘principal’, or Peter knows what else versions of them, who couldn’t write a decent test if their lives depended on it.
  • Even worse thing: their clients don’t have the time or knowledge (probably the latter) to take a look and recognize what absolute garbage those expensive ‘consultants’ deliver.

Now that software development and delivery cycles are speeding up ever more, and teams are increasingly relying on automation to safeguard the quality of the releases they’re putting out into the world, it’s high time to do something about this. Educate both the people responsible for creating automation and the teams that rely on their efforts about what constitutes good automation, and give them the tools to monitor and act upon automation quality. If we as test automation crafts(wo)men don’t start doing this sooner rather than later, I’m afraid crappy automation will become the new bottleneck in modern software development, just like all that testing at the end of waterfall projects was in times past.

Once we’ve tackled that problem, let’s move on to who’s the best fit to write what we agree is good automation.

Defining Continuous Testing for myself

On a couple of recent occasions, I found myself talking about Continuous Testing and how test automation relates to this phenomenon (or buzzword, if you prefer that term). However, to this day I haven’t had a decent answer to the question of what Continuous Testing (CT) is and how exactly it relates to test automation. Now that I’m busy preparing another webinar (this time together with the people at Testim, but more on that probably in another post), and we find ourselves again talking about CT, I thought it was about time to start carving out a definition for myself. This blog post is very much me thinking out loud, so bear with me.

To start, CT is definitely not equal to test automation, not even to test automation on steroids (i.e., test automation that is actually fast, stable and repeatable). Instead, I see CT as an integrated part of the Continuous Delivery (CD) life cycle:

Continuous Testing as part of the Continuous Delivery life cycle

You could also say that CT is a means that teams can adopt to support CD while aiming to deliver quality software.

Let’s take a closer look and dissect the term ‘Continuous Testing’. To me, the first part, ‘Continuous’, means two things in the CD context:

  1. The software that is being created is continuously tested (or, more likely, continually) in a given environment. In other words, any version of the software that enters an environment is immediately subjected to the tests that are associated with that environment. This environment can be a local development environment, a test environment, or even a production environment (where testing is often done in the form of monitoring).
  2. The software that is being created is continuously tested as it passes through environments. In other words: there’s no deploy that isn’t followed by some form of testing, and in the case of the local development environment, tests are run before any deployment (or better, before any commit and build) takes place (see the sketch below for an illustration of both dimensions).

Continuous Testing in two dimensions
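To illustrate those two dimensions in something other than a picture, here’s a small conceptual sketch in Java. It is not how any particular CI/CD tool is configured; the environments, test types and version number are all made up, and the code only serves to show the idea that every deploy into an environment immediately triggers the tests associated with that environment, for every environment the software passes through.

// Conceptual sketch only: every environment has its own set of tests (dimension 1),
// and every deploy into an environment immediately triggers those tests, for each
// environment the software passes through (dimension 2). All names are made up.
import java.util.List;
import java.util.Map;

public class ContinuousTestingSketch {

    // Dimension 1: each environment is associated with the checks that run there.
    static final Map<String, List<String>> TESTS_PER_ENVIRONMENT = Map.of(
        "local", List.of("unit tests", "static analysis"),
        "test", List.of("API tests", "UI smoke tests"),
        "acceptance", List.of("end-to-end scenarios", "performance checks"),
        "production", List.of("monitoring", "synthetic transactions")
    );

    // No deploy without the tests for that environment being run right after it.
    static void deploy(String version, String environment) {
        System.out.printf("Deploying %s to %s%n", version, environment);
        TESTS_PER_ENVIRONMENT.get(environment)
            .forEach(test -> System.out.printf("  running %s%n", test));
    }

    // Dimension 2: the software is tested again in every environment it moves through.
    public static void main(String[] args) {
        for (String environment : List.of("local", "test", "acceptance", "production")) {
            deploy("v1.2.3", environment);
        }
    }
}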

This is not necessarily different from any other software delivery method, but what makes CT (and CD) stand out is that the time between deployments is typically very short. And that’s where the second part of the term Continuous Testing comes into play: ‘Testing’. How is this testing done? This is where automation often enters the picture, as an enabler of fast feedback and of software moving through the pipeline quickly, yet in a controlled manner.

Teams that want to ‘do’ CT and CD simply cannot be blocked by testing as an activity tacked on at the end. Instead, CT requires a shift in mindset from traditional testing as an afterthought to testing being ingrained throughout the pipeline. Formalized handoffs and boundaries between environments will have to be replaced by testing activities that act as gatekeepers and safety nets. And where necessary and useful, this testing is supported by tools. In this respect, test automation can be an enabler of Continuous Testing. But (as is so often the case) only if that automation makes sense. Again, I refer to this blog post if you want to know what I mean by ‘making sense’ in a CT context.

I still don’t have a one-line, encyclopedia-style definition for Continuous Testing, but at least the thought process I went through to write this (short) blog post helped me put some things into place. Besides Katrina Clokie’s book on testing in DevOps, the following articles have been a source of information for me (while being much better written than these ramblings):

  • What is continuous testing? Know the basics of this core safety net for DevOps teams
  • A real-world guide to continuous testing
  • What is Continuous Testing?
  • Continuous Testing vs. test automation (whitepaper)
  • The Great Debate: Automated Testing vs Continuous Testing

What is your definition of Continuous Testing? Do you have one?

Why I think automation education is broken (and what I’ll try and do about it)

I’ve written various blog posts about test automation craftsmanship recently, a topic that becomes dearer to me every time I see people making automation-related statements or asking questions that are, at the very least, of questionable quality. Like in ye olde times, craftsmanship isn’t something that is easily attained, or can be attained at all, without proper education and mentorship. And that’s where I think the test automation world is still lacking. Or, to put it in positive terms: there’s room for improvement in this respect.

And I’m not alone in this. I’ve had a couple of good discussions on Twitter over the last couple of weeks (yes, this is possible!), most notably an insightful exchange of messages with Matt Heusser (not sure if you’re reading this, but anyway, thanks Matt!), on the current state of automation training and how it is advertised. The gist of it (and note that this is my take on it):

  1. There is an (over)abundance of tool-centered training out there. This is not necessarily a problem, but there is definitely room for broader training on the fundamentals of test automation and how it should be applied.
  2. A lot of this tool-centered training is advertised as ‘Become an expert in tool XYZ in just three days’. This IS a problem. First of all, I don’t think it is possible to become an expert in any significant tool, approach or anything else in the test automation space in just a couple of days. It’s possible to become familiar with the API and features of a tool, but that hardly makes you an expert. Expertise comes with application, failing, studying, learning, etc. It takes months, sometimes years, not days.

The second point is also dangerous in that it can lead to an army of self-proclaimed ‘experts’ who are really nothing more than people with hammers, seeing only nails on their path. Not an image I have in mind when I think about what constitutes a test automation expert.

What is lacking, in my opinion, is something that gives people involved in test automation a solid foundation of knowledge about the field, its challenges and its place in the larger software development space. Something that goes beyond the specifics of individual tools. Something that talks some sense into the people crying ‘automate all the things’, so to speak. And by ‘people’, I don’t just mean automation engineers, but developers, scrum masters, POs, managers, CxO-level people: everybody who is a test automation stakeholder and should therefore care about what applying automation in a sensible way can bring to software development.

So, what to do? Ranting about how things are broken is one thing (and I must admit that it DOES feel good to me), but I’ve been thinking about and saying the above for a while now. So maybe it’s time to start to do something about it. That’s why I’ve started to outline a course that I think should be able to fill the void when it comes to education around test automation. Call it ‘Test automation awareness’, call it ‘Automation 101’, call it whatever you like, I’m still open to suggestions as to the name of the course. Point is, it’s time to put my money where my mouth is. I’ve already reached out to some people and received some awesome feedback (thanks guys, you know who you are). Funny thing, a couple of people I reached out to said they were working on something similar. Which is even better, as this confirms my view that there is a need for a course like this.

I’m not sure at the moment when this will go live, and in what form exactly, but as soon as there’s more to disclose, I’ll do it here. If you’d like to give input, constructive criticism and/or contribute in some other way, please send me a note at bas@ontestautomation.com and I’ll get back to you. I’m very much looking forward to making this a thing, although not so much to the work that’s ahead of me. But I feel it’s important enough to get done.

On a not totally unrelated note, I’ve also recently had a very fruitful discussion with someone from an academic research facility, and if it all works out, it looks like I’ll be more closely involved in one of their projects as well. This might also be a good place to start infiltrating the education system and seeing to it that test automation earns a better place in higher education as well. I don’t have the illusion that I’ll change the world overnight in this respect, but you have to start somewhere, right? And if anything, it’ll be a good opportunity for me to step a little outside of my comfort zone again.

I’ll keep you posted.

P.S.: Most of you will have heard or read that Katrina Clokie’s book ‘A Practical Guide To Testing In DevOps’ has been released through LeanPub. I’ve just finished reading it, and the only thing I can say is that if you’re even remotely interested in testing or DevOps, I highly recommend you buy a copy. It’s chock-full of tips and case studies for everybody, tester or not, facing the challenge of keeping up with DevOps and with the rapidly increasing speed of software delivery in general, without forgetting to keep an eye on software quality.