First steps as a test automation coach

“We want our development teams to take the next step towards adopting Continuous Delivery by giving their test automation efforts a boost.”

That was the task I was given a couple of months ago when I started a new project, this one for a well-known media company here in the Netherlands. Previously, I’ve mainly been involved in more hands-on test automation implementation projects, meaning I was usually the one designing and implementing the test automation solution (either alone or as part of a team). For this project, however, my role would be quite different:

  • There were multiple development teams to be supported (the exact number changed a couple of times during the assignment, but there were at least four at any time), so there was no way I could spend enough time implementing automated tests for any of them.
  • This was a part-time assignment: I only had two days per week available due to other commitments, which made it even less feasible to get involved in any serious test automation implementation myself.
  • Each development team was responsible for its own line of products and could make its own decisions on the technology stack to be used (within certain boundaries). Most of those stacks I had never worked with, or even heard of, before (GraphQL, for example), making it even harder to contribute to the actual tests.

Instead, at the start of the project, we decided that I would act as a test automation coach of sorts, leaving the creation of the test automation to the development teams (which made perfect sense for this client). This was something I’d never done before, so being given the chance to do so was a pleasant surprise. Normally, as a contractor, I’m only hired to do things I’m already experienced in, but I guess that through my resume and the interview I built enough trust for them to hire me for the job anyway. I’m very grateful for that.

So, what did I do?

Kickoff
As the development teams consisted of about 40 developers in total, with a wide range of experience levels, backgrounds and preferences in technology and programming languages, we (the hiring manager, the client’s team of testers and I) thought it would be a good idea to get them at least somewhat on the same level with regard to the concept of test automation. We did this by organizing a number of test automation awareness sessions, in which I presented my view on test automation. The focus of this presentation was mostly on the ‘why?’ and the ‘what?’, because I quickly figured out that the developers themselves were perfectly capable of figuring out the ‘how?’ (an impression I get from a lot of my clients nowadays, by the way).

Taking inventory of test automation maturity
Next up was a series of interviews with the tech leads of all development teams, to see where they stood with regard to test automation: what they already did, what was lacking and what would be a good ‘next step’ to allow that team to make the transition towards Continuous Delivery. This information was shared across teams to promote knowledge sharing. You’d be surprised how often teams are struggling with something that has already been solved by another team just a couple of yards away, without either party knowing about the situation of the other.

Test automation hackathons
The most important and most impactful part of my assignment was organizing a two-day ‘hackathon’ (for lack of a better word) for each of the teams, one team at a time. The purpose of this hackathon was to take the team away from the daily grind of developing and delivering production code and have them work on their technical debt with regard to test automation. The rules of the game:

  • Organize the hackathon in a space separate from the work floor, both to give the team the feeling that they were removed from the usual work routine and to prevent outside interference as much as possible.
  • Organize the hackathon as a Scrum sprint of sorts, with a kickoff/planning session, show and tell/standup twice a day and a demo session and retrospective at the end.
  • Deliver working software, meaning that I’d rather have one test that works and is fully integrated into the build and deployment process than fifty tests that do not run automatically. Once the groundwork has been taken care of, the most difficult hurdles are never in creating more tests.
  • Focus on a capability that the team wants to have, but does not currently have. For some teams this was unit testing for a specific type of software, for others it was end-to-end testing or build pipelines, and in one case production monitoring. The subject didn’t matter, as long as it had to do with software quality and it was something the team did not already do.

Results
The hackathons worked out really well. Monitoring the teams after they had completed their ‘two days of test automation’, I could see they had indeed taken a step in the right direction: they were more active on the test automation front and had a better awareness of what they were working towards. Mission accomplished!

As my assignment ended after that, I can’t say anything about the long-term effects, unfortunately, but I’m convinced that the testers themselves can take over the role of test (automation) coach perfectly well. I will, of course, stay in touch with the client regularly to see how they’re doing.

What did I learn?
As I said, this was my first time acting more as a coach than as an engineer, so naturally I learned a lot of things myself:

  • Hackathons are a great way of improving test automation efforts for development teams. Pulling teams away from their daily grind and having them focus on improving their automation is both useful and fun. I was lucky that management support was not an issue, so your mileage may vary, but my point stands.
  • I (think I) have what it takes to be a test automation coach. This was the biggest breakthrough for me personally. As a pretty introverted person who regularly likes to play around with tools, I initially found it hard to step away from the keyboard, fight the urge to create tests myself, and instead help other people become better at it. It IS the way forward for my career, though, I think, because I’ve yet again seen that there’s no one better at creating automated tests than a (good) developer. What I can bring to the table is experience and guidance as to the ‘why?’ and the ‘what?’.
  • Part-time projects are great in terms of flexibility, especially when you find yourself in a coaching role. You can organize a hackathon, give teams guidance and suggestions on what to work on, come back a couple of days later to see how they’re doing, evaluate and discuss, and let them take the next step.

In short, my first adventure as a test automation coach has been a great experience. I’m looking forward to the next one!

On elegance

For the last couple of weeks, I’ve been reading up on the work of one of the most famous Dutch computer scientists of all time: Edsger W. Dijkstra. Being from the same country and sharing a last name (we’re not related, as far as I know), I had of course heard of him and of the importance of his work to the computer science field, but it wasn’t until I saw a short documentary on his life (it is in Dutch, so probably of little use to most of you) that I became interested in his work. Considering the subject of this blog, his views on software testing are of course what interests me most, but there is more.

Most of you have probably heard what is perhaps his best-known quote related to software testing:

Program testing can be used to show the presence of bugs, but never to show their absence.

However, that’s not what I wanted to write about in this blog post. Dijkstra left his legacy by means of a large number of manuscripts, collectively known as the EWDs (named for the fact that he signed them with his initials). For those of you with a little spare time left, all transcribed EWDs can be read here.

I was particularly struck by a quote from Dijkstra that featured in the documentary and that is included in EWD 1284:

After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure.

While Dijkstra was referring to solutions to computer science-related problems in general, and to software development and programming structures in particular, it struck me that this quote applies quite well to test automation too.

Here’s the definition of ‘elegance’ from the Oxford Dictionary:

  1. The quality of being graceful and stylish in appearance or manner.
  2. The quality of being pleasingly ingenious and simple.

To me, this is what defines a ‘good’ solution to any test automation problem: one that is stylish, ingenious and simple. Too often, I see solutions proposed (or answers to questions given) that are overly complicated, unstructured, not well thought through or just plain horrible. And, granted, I’ve been guilty of the same myself, and I’m probably still guilty of it from time to time. But as of now, I know at least one of the characteristics a good solution should have: elegance.

Dijkstra also gives two typical characteristics of elegant solutions:

An elegant solution is shorter than most of its alternatives.

This is why I like tools such as REST Assured and WireMock so much: the code required to define tests and mocks in these tools is short, readable and powerful: elegance at work.
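
As an illustration, here’s a minimal, self-contained sketch of what I mean (the endpoint, port and field names are made up for this example): a WireMock stub and a REST Assured check against it, each only a handful of lines and readable almost like plain English.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

class ElegantToolingExample {

    @Test
    void statusEndpointReportsHealthy() {
        // Stub: a GET on /status returns a small JSON document (WireMock).
        WireMockServer wireMock = new WireMockServer(8089);
        wireMock.start();
        wireMock.stubFor(get(urlEqualTo("/status"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"healthy\": true}")));

        // Check: the endpoint answers with HTTP 200 and the expected field (REST Assured).
        given()
            .when().get("http://localhost:8089/status")
            .then().statusCode(200)
            .body("healthy", equalTo(true));

        wireMock.stop();
    }
}
```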

An elegant solution is constructed out of logically divided components, each of which can be easily swapped out for an alternative without influencing the rest of the solution too much.

This is a standard that every test automation solution should be held to: if any of its components fails, is no longer maintained or is outperformed by an alternative, how much effort does it take to swap that component out?
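
One way to live up to that standard is to keep the checks themselves behind a small abstraction. The following is a rough, hypothetical sketch (the interface and class names are made up for this example): the checks only depend on the interface, so the underlying HTTP library can be replaced without touching any of the checks.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The checks only ever talk to this interface, never to a concrete HTTP library.
interface StatusCheckClient {
    int statusCodeOf(String url);
}

// One possible implementation, backed by the JDK's built-in HttpClient (Java 11+).
// Replacing it with an implementation based on another library only means writing
// a new class that implements the same interface; the checks remain untouched.
class JdkStatusCheckClient implements StatusCheckClient {

    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    public int statusCodeOf(String url) {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return client.send(request, HttpResponse.BodyHandlers.discarding()).statusCode();
        } catch (Exception e) {
            throw new IllegalStateException("Could not perform a GET on " + url, e);
        }
    }
}
```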

As a last nugget of wisdom from Dijkstra, here’s another of his quotes on elegance:

Elegant solutions are often the most efficient.

And isn’t efficiency something we should all strive for when creating automated tests? It is definitely something I’ll try to pay more attention to in the future, along with effectiveness, which is maybe even more important.

However, according to my namesake, I am (and probably most of you are) failing to meet his standards in all of the projects that we’re working on. As he states:

For those who have wondered: I don’t think object-oriented programming is a structuring paradigm that meets my standards of elegance.

Ah well…

Trust automation

By now, most people will be aware that the primary goal of test automation is NOT ‘finding bugs’. Sure, it’s nice if your automated tests catch a bug or two that would otherwise find its way into your production environment, but until artificial intelligence in test automation takes off (by which I mean REALLY takes off), testers will be way more adept at finding bugs than even the most sophisticated of automated testing solutions.

No, the added value of test automation is in something different: confidence. From the Oxford dictionary:

Confidence: Firm belief in the reliability, truth, or ability of someone or something.

A set of good automated tests will instill in stakeholders (including, but definitely not limited to, testers) the confidence that a system under test produces output or performs certain functions as previously specified, even after rounds and rounds of changes, fixes, patches, refactorings and other updates (whether that’s also the intended way the system should work is a whole different discussion).

This confidence is established by trust:

Trust: The feeling or belief that one can have faith in or rely on someone or something.

Until recently, I wasn’t aware of the subtle difference between trust and confidence. It doesn’t really help either that both terms translate to the same word in Dutch (‘vertrouwen’). Then I saw this explanation by Michalis Pavlidis:

Both concepts refer to expectations which may lapse into disappointments. However, trust is the means by which someone achieves confidence in something. Trust establishes confidence. The other way to achieve confidence is through control. So, you will feel confident in your friend that he won’t betray you if you trust him or/and if you control him.

Taking this back to software development, and to test automation in particular: confidence in the quality of a piece of software is achieved by control over that quality or by trust in it. As we’re talking about test automation here, more specifically the automated execution of predefined checks, I think it’s safe to say that we can forget about test automation actively controlling the quality of the end product. That leaves instilling trust (with confidence as a result) in the system under test as a prime goal for test automation. It’s generally a bad idea to make automated checks the sole entity responsible for creating and maintaining this trust, by the way, but when applied correctly, they can certainly contribute a lot.

However, in order for your automated checks to build this trust, you need to trust the automated checks themselves as well. Simply (?) put: I wouldn’t be confident in the quality of a system if that quality was (even partially) assessed by automated checks I don’t trust.

Luckily, there are various ways to increase the trust in your automated checks. Here are three of them:

  • Run your automated checks early and often to see if they are fast, stable and don’t turn out to be high-maintenance drama queens.
  • Check your checks to see if they still have the trust-creation power you intended them to have when you first created them. Investigate whether applying techniques such as mutation testing can be helpful when doing so.
  • Have your automated checks communicate both their intent (what is it that is being checked?) and their result (what was the outcome? what went wrong?) clearly and unambiguously to the outside world. This is something I’ll definitely investigate and write about in future posts, but it should already be clear that automated checks that are unclear about their intent and their result are not providing the value they should; see the sketch after this list for a small example.
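
To make that last point a bit more concrete, here’s a minimal sketch (the ShoppingCart class and its names are purely hypothetical): the method name states the intent of the check, and the assertion message explains the result in terms a reader can act on, instead of a bare ‘expected 0 but was 250’.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShoppingCartChecks {

    // Minimal stand-in for the hypothetical system under test.
    static class ShoppingCart {
        long totalInCents() {
            return 0;
        }
    }

    // The intent lives in the method name; anyone scanning a test report knows
    // immediately what was being checked.
    @Test
    void aNewlyCreatedCartHasATotalOfZeroCents() {
        ShoppingCart cart = new ShoppingCart();

        // The message explains what a failure means in domain terms, rather than
        // leaving the reader with only 'expected: <0> but was: <...>'.
        assertEquals(0, cart.totalInCents(),
                "A newly created cart should have a total of 0 cents; "
                        + "a non-zero total means the price calculation cannot be trusted");
    }
}
```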

Test automation is great, yet it renders itself virtually worthless if it can’t be trusted. Make sure that you trust your automation before you use it to build confidence in your system under test.