Why I think automation education is broken (and what I’ll try and do about it)

I’ve written various blog posts about test automation craftsmanship recently, a topic that becomes dearer to me every time I see people making automation-related statements or asking questions that are, at the very least, of questionable quality. Like in ye olde times, craftsmanship isn’t something that is easily attained, if it can be attained at all, without proper education and mentorship. And that’s where I think the test automation world is still lacking. Or, to put it in more positive terms: there’s room for improvement in this respect.

And I’m not alone in this. I’ve had a couple of good discussions on Twitter over the last couple of weeks (yes, this is possible!), most notably an insightful exchange of messages with Matt Heusser (not sure if you’re reading this, but anyway, thanks Matt!), on the current state of automation training and how it is advertised. The gist of it (and note that this is my take on it):

  1. There is an (over)abundance of tool-centered training out there. This is not necessarily a problem, but there is definitely room for broader training on the fundamentals of test automation and how it should be applied.
  2. A lot of this tool-centered training is advertised as ‘Become an expert in tool XYZ in just three days’. This IS a problem. First of all, I don’t think it is possible to become an expert in any significant tool or approach in the test automation space in just a couple of days. It’s possible to become familiar with the API and features of a tool, but that hardly makes you an expert. Expertise comes with application, failing, studying, learning, etc. It takes months, sometimes years, not days.

The second point is also dangerous in that it can lead to an army of self-proclaimed ‘experts’ who are really nothing more than people with hammers seeing only nails on their path. That’s not the image I have in mind when I think about what constitutes a test automation expert.

What is lacking, in my opinion, is something that gives people involved in test automation a solid foundation of knowledge about the field, its challenges and its place in the larger software development space. Something that goes beyond the specifics of individual tools. Something that talks some sense into the people crying ‘automate all the things’, so to speak. And by ‘people’, I don’t just mean automation engineers, but developers, scrum masters, POs, managers, CxO-level people, everybody who is a test automation stakeholder and should therefore care about what applying automation in a sensible way can bring to software development.

So, what to do? Ranting about how things are broken is one thing (and I must admit that it DOES feel good to me), but I’ve been thinking about and saying the above for a while now. So maybe it’s time to start to do something about it. That’s why I’ve started to outline a course that I think should be able to fill the void when it comes to education around test automation. Call it ‘Test automation awareness’, call it ‘Automation 101’, call it whatever you like, I’m still open to suggestions as to the name of the course. Point is, it’s time to put my money where my mouth is. I’ve already reached out to some people and received some awesome feedback (thanks guys, you know who you are). Funny thing, a couple of people I reached out to said they were working on something similar. Which is even better, as this confirms my view that there is a need for a course like this.

I’m not sure at the moment when this will go live, and in what form exactly, but as soon as there’s more to disclose, I’ll do it here. If you’d like to give input, constructive criticism and/or contribute in some other way, please send me a note at bas@ontestautomation.com and I’ll get back to you. I’m very much looking forward to making this a thing, although not so much to the work that’s ahead of me. But I feel it’s important enough to get done.

On a not totally unrelated note, I’ve also recently had a very fruitful discussion with someone from an academic research facility, and if it all works out, it looks like I’ll be somewhat more closely involved in one of their projects as well. This might also be a good place to start infiltrating the education system and seeing to it that test automation earns a better place in higher education as well. I don’t have the illusion that I’ll change the world overnight in this respect, but you have to start somewhere, right? And if anything, it’ll be a good opportunity for me to step a little outside of my comfort zone again.

I’ll keep you posted.

P.S.: Most of you will have heard or read that Katrina Clokie’s book ‘A Practical Guide To Testing In DevOps’ has been released through LeanPub. I’ve just finished reading it, and the only thing I can say is that if you’re even remotely interested in testing or DevOps, I highly recommend you buy a copy. It’s chock-full of tips and case studies for everybody, tester or not, facing the challenge of keeping up with DevOps and with the rapidly increasing speed of software delivery in general, without forgetting to keep an eye on software quality.

On crossing the bridge into unit testing land

Maybe it’s just the people and organizations I meet and work with, but no matter how actively they’re trying to implement automation and involve testers in it, there’s one bridge that’s often too far for those tasked with test automation, and that’s the bridge to unit testing land. When asking them for the reasons that testers aren’t involved in unit testing, I typically get one (or more, or all) of the following answers:

  • ‘That’s the responsibility of our developers’
  • ‘I don’t know how to write unit tests’
  • ‘I’m already busy with other types of automation and I don’t have time for that’

While these answers might sound perfectly reasonable to some, I think there’s something inherently wrong with all of them. Let’s take a look:

  • With more and more teams becoming multidisciplinary, we can’t simply shift responsibility for any task to a specific subgroup. If ‘we’ (i.e., the testers) keep saying that unit testing is a developer’s responsibility, we’ll never get rid of the silos we’re trying to break down.
  • While you might not know how to actually write unit tests yourself, there’s a lot you CAN do to contribute to their value and effectiveness. Try reviewing them, for example: has the developer of the unit test missed some obvious cases?
  • Not having time to concern yourself with unit testing reminds me of the picture below. If something can be covered with a decent set of unit tests, there really is no need to write integration or even (shudder) end-to-end tests for it.

Are you too busy to pay attention to unit testing?

I’m not a devotee of the test automation pyramid per se, but there IS a lot of truth to the concept that a decent set of unit tests should be the foundation of every solid test automation effort. Unit tests are relatively easy to write (even though it might not look that way to some), they run fast (no need to wait until web pages are loaded and complete their client-side processing, for example) and therefore they’re the best way to provide the fast feedback that development teams are looking for in this age of Continuous Integration / Delivery / Deployment / Testing / Everything / … .

To put it in even bolder terms, as a tester, I think you have the responsibility of familiarizing yourself with the unit testing activities that your development team is undertaking. Offer to review them. Try to understand what they do, what they check and where coverage could be improved. Yes, this might require you to actually talk to your developers! But it’ll be worth it, not just to you, but to the whole team and, in the end, also to your product and your customers. Over time, you might even write some unit tests yourself, though, again, that’s not a necessity for you to provide value in the land of unit testing. Plus, you’ll likely learn some new tricks and skills by doing so, and that’s always a good thing, right?
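To give an idea of what you might be looking at when reviewing, here’s a minimal sketch of a unit test written with JUnit 5. The DiscountCalculator class and its rules are made up purely for illustration; the point is how small and fast such a test is, and how a reviewing tester can add value by spotting the cases that are missing.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: calculates a discount percentage based on the order total.
class DiscountCalculator {

    int discountFor(double orderTotal) {
        return orderTotal >= 100.0 ? 10 : 0;
    }
}

class DiscountCalculatorTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void ordersBelowTheThresholdGetNoDiscount() {
        assertEquals(0, calculator.discountFor(99.99));
    }

    @Test
    void ordersAtOrAboveTheThresholdGetTenPercent() {
        assertEquals(10, calculator.discountFor(100.0));
    }

    // A reviewing tester might notice that there is no test for a zero or negative
    // order total: exactly the kind of missed case worth pointing out to the developer.
}
```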

For those of you looking for another take on this subject, John Ruberto wrote an article called ‘100 percent unit test coverage is not enough’, which was published on StickyMinds. A highly recommended read.

P.S.: Remember Tesults, the SaaS solution for storing and displaying test results I wrote about a couple of months ago? The people behind Tesults recently let me know they now offer a free forever plan as well. So if you were interested in using their services but could not afford or justify the investment, it might be a good idea to check their new plan out here. And again, I am in no way, shape or form associated with, nor do I have a commercial interest in Tesults as an organization or a product. I still think it’s a great platform, though.

Tackle the hard problems first

When you’re given the task of creating a new test automation solution (or expanding upon an existing one, for that matter), it might be tempting to start working on creating tests right away. The more tests that are automated, the more time is freed to do other things, right? For any test automation solution to be successful and, more importantly, scalable, you’ll need to take care of a number of things, though. These may not seem to directly contribute to the coverage you’re getting with your automated tests, but failing to address them from the beginning will likely cause you some serious headaches later on. So why not deal with them while things are still manageable?

Run tests from within a CI pipeline
If you’re like me, you’ll want to be able to run tests on demand, which could be either on a scheduled or on a per-commit basis. You wouldn’t accept a ‘works on my machine’ attitude from a developer, would you? Also, I’ve been ranting on about how test automation is software development and should be treated as such, so start doing so! Have your tests uploaded to a build server and run them from there. This will ensure that there’s no funny stuff left in your test setup, such as references to absolute (or even relative) file paths that only exist on your machine, references to dependencies that are not automatically resolved, and so on. Also, from my experience, user interface-driven tests in particular always seem to behave a little differently when it comes to timing and syncing when run from a build server, so running them from there is the ultimate proof that they’re stable and can be trusted.
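As a small illustration of keeping that ‘funny stuff’ out of your test setup, here’s a sketch of resolving environment-specific settings at runtime instead of hard-coding them. The property name, default URL and file locations are made up for the example; the idea is that the exact same test code runs unchanged on your own machine and on the build server.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical configuration helper for tests.
public class TestConfig {

    // Base URL of the application under test; defaults to a local instance,
    // but a CI job can override it, e.g. with -Dtest.baseUrl=https://test.example.com
    public static String baseUrl() {
        return System.getProperty("test.baseUrl", "http://localhost:8080");
    }

    // Test data files are resolved relative to the project structure,
    // never through an absolute path that only exists on one specific machine.
    public static Path dataFile(String fileName) {
        return Paths.get("src", "test", "resources", "testdata", fileName);
    }
}
```

With a Maven-based setup, for example, the build server would typically pass that property on the command line (mvn test -Dtest.baseUrl=…), so the tests pick up the right environment without any changes to the code.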

Take care of error handling and stability
Strongly related to the previous one is taking care of stabilizing your test execution and handling errors, both foreseen and unforeseen. This applies mainly to user interface-driven tests, but other types of automated tests should not be exempt from this. My preferred way of implementing error handling is by means of wrapper methods around API calls (here’s an example for Selenium). Don’t be tempted to skip this part of implementing tests and ‘make do’ with less than optimal error handling. The risks of having to spend a lot of time getting it right later on, of having to implement error handling in more than one place, and of spending a lot of time finding out why in the name of Pete your test run failed exactly are just too high. Especially when you stop running tests on your machine only, which, again, you should do as soon as humanly possible.
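I won’t repeat the linked example here, but to illustrate the idea, here’s a minimal sketch of such a wrapper method using the Selenium 4 style API. The wait and the error handling live in one place, instead of being scattered across (or missing from) your individual tests.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wrapper around a raw Selenium call: wait for the element to be ready,
// handle the foreseen failure once, and report something more useful than a bare stack trace.
public class SeleniumActions {

    private final WebDriver driver;

    public SeleniumActions(WebDriver driver) {
        this.driver = driver;
    }

    public void click(By locator) {
        try {
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(locator))
                    .click();
        } catch (TimeoutException e) {
            // One place to decide what a failed click means for the test run.
            throw new AssertionError("Element " + locator + " was not clickable within 10 seconds", e);
        }
    }
}
```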

Have a solid test data strategy
In my experience, one of the hardest problems to get right when it comes to creating automated tests is managing the test data. The more you move towards end-to-end tests, and the more (external) dependencies and systems are involved, the harder this is to get right. But even a system that operates in a relatively independent manner (and these are few and far between) can cause some headaches, simply because its data model can be so complex. No matter what situation you’re dealing with, having the right test data available at the right time (read: all the time) can be very hard to accomplish. And therefore this problem should be tackled as soon as possible. The earlier you think of a solid strategy, the more you’ll benefit from it in the long run. And don’t get complacent when everything is a-ok right now. There’s always the possibility that you’re simply working on those tests that do not yet need a better-thought-out test data approach, but that doesn’t mean you’re safe forever!
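One approach that can help, sketched below with a made-up Customer object, is to have each test create the data it needs through a builder with sensible defaults, rather than relying on records that just happen to exist in a shared test database. Everything in this example is hypothetical and stands in for whatever your system’s data model and creation mechanism (API, seeding script) look like.

```java
// Hypothetical domain object.
class Customer {
    final String name;
    final String country;
    final boolean active;

    Customer(String name, String country, boolean active) {
        this.name = name;
        this.country = country;
        this.active = active;
    }
}

// Each test builds exactly the data it needs and relies on defaults for everything else.
public class CustomerBuilder {

    private String name = "Default Test Customer";
    private String country = "NL";
    private boolean active = true;

    public CustomerBuilder named(String name) { this.name = name; return this; }
    public CustomerBuilder inCountry(String country) { this.country = country; return this; }
    public CustomerBuilder inactive() { this.active = false; return this; }

    public Customer create() {
        // In a real project this is where you would call the application's API or a
        // seeding script, so the test owns its data instead of depending on whatever
        // happens to be present in a shared environment.
        return new Customer(name, country, active);
    }
}
```

A test that needs, say, an inactive customer then simply asks for one: new CustomerBuilder().inactive().create().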

Get your test environment in order
With all these new technologies like containers and virtual machines readily available, I’m still quite surprised to see how hard it is for some organizations to get a test environment set up. I still see this take weeks, sometimes. And then the environment is unavailable for hours or even days for every new deployment. Needless to say, that’s very counterproductive when you want to be able to run your automated tests on demand. So my advice would be to try and get this sorted out as soon as possible. Make sure that everybody knows that having a reliable and available test environment is paramount to the success of test automation. Because it is. And since modern systems are increasingly interconnected and ever more dependent on components both inside and outside the organization’s walls, don’t stop at getting a test environment for your primary application. Make sure that there’s a suitable test environment available and ready on demand for all components that your integration and end-to-end tests touch upon. Resort to service virtualization when there’s no ‘real’ alternative. Make sure that you can test whenever you want.
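To illustrate that last point: a tool like WireMock (one of several options for service virtualization) lets you simulate a dependency that doesn’t have a reliable test environment of its own. The endpoint and the canned response below are made up for illustration.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

// Starts a WireMock stub that stands in for an external 'customer' service,
// so integration and end-to-end tests can run even when the real system is unavailable.
public class CustomerServiceVirtualization {

    public static WireMockServer startStub() {
        WireMockServer server = new WireMockServer(9876);
        server.start();

        server.stubFor(get(urlEqualTo("/customers/12345"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": \"12345\", \"status\": \"active\"}")));

        return server;
    }
}
```

Point your application (or your tests) at the stub instead of the real dependency, and ‘the other system is down again’ stops being a reason not to test.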

In the end, writing more automated tests is never the hard problem. Making sure that all things are in place that enable you to grow a successful test automation suite is. So, tackle that first, while your suite is still small, and increasing the coverage of your automated tests will become so much easier.