On including automation in your Definition of Done

Working with different teams in different organizations means that I’m regularly faced with the question of whether and how to include automation in the Definition of Done (DoD) used in Agile software development. I’m not an Agilist myself per se (I’ve seen too many teams get lost in overly long discussions on story points and sticky notes), but I DO like to help people and teams struggling with the place of automation in their sprints. As for the ‘whether’ question: yes, I definitely think that automation should be included in any DoD. The answer to the ‘how’ of including it, a question that could also be rephrased as the ‘what’ to include, is a little more nuanced.

For starters, I’m not too keen on rigid DoD statements like

  • All scenarios that are executed during testing and that can be automated should be automated
  • All code should have 100% unit test coverage
  • All automated tests should pass at least three consecutive times, except on Mondays, when they should pass four times.

OK, I haven’t actually seen that last one, but you get my point. Stories change from sprint to sprint. The impact on production code, be it new code that needs to be written, existing code that needs to be updated or refactored, or old code that needs to be removed (my personal favorite), will change from story to story and from sprint to sprint. Why, then, keep statements regarding your automated tests as rigid as the examples above? Doesn’t make sense to me.

I’d rather see something like:

Creation of automated tests is considered and discussed for every story and its overarching epic, and applied where deemed valuable. Existing automated tests are updated where necessary and removed if redundant.

You might be thinking: ‘but this cannot be measured, how do we know we’re doing it right?’. That’s a very good question, and one that I do not have a definitive answer to myself, at least not yet. But I am of the opinion that knowing where to apply automation, and more importantly, where to refrain from it, is more of an art than a science. I am open to suggestions for metrics and to alternative opinions, of course, so if you’ve got something to say, please do.

Having said that, one metric you might consider when deciding whether or not to automate a given test or set of tests is whether doing so increases or decreases your technical debt. The following consideration might be a bit rough, but bear with me; I’m sort of thinking out loud here. On the one hand, given that a test is valuable, having it automated will shorten the feedback loop and decrease technical debt. On the other hand, automating a test takes time in itself and increases the size of the code base to be maintained. Choosing which tests to automate is about finding the right balance with regard to technical debt. And since the optimum will likely be different from one user story to the next, I don’t think it makes much sense to put overly general statements about what should be automated in a DoD. Instead, for every story, ask yourself:

Are we decreasing or increasing our technical debt when we automate tests for this story? What’s the optimum way of automating tests for this story?

The outcome might be to create a lot of automated tests, but it might also be to not automate anything at all. Again, all depending on the story and its contents.

Another take on the question of whether or not to include automated test creation in your DoD might be to distinguish between the different scope levels of tests:

  • Creating unit tests for the code that implements your user story will often be a good idea. They’re relatively cheap to write, they run fast and they therefore give you fast feedback on the quality of your code. More importantly, unit tests act as the primary safety net for future development and refactoring efforts. And I don’t know about you, but when I undertake something new, I’d like to have a safety net, just in case. Much like in a circus. I’m deliberately refraining from stating that both circuses and Agile teams also tend to feature a not insignificant number of clowns, so forget I said that.
  • You’ll probably also want to automate a significant portion of your integration tests. These tests, for example those executed at the API level, can be harder to perform manually and are relatively cheap to automate with the right tools (there’s a sketch of what such a test could look like after this list). They’re also my personal favorite type of automated test, because they sit at the optimum point between scope and feedback loop length. It might be harder to write integration tests when the component you’re integrating with is outside of your team’s control, or does not yet exist. In that case, a simulation might need to be created, which requires additional effort that might not be perceived as directly contributing to the sprint. This should be taken into account when it comes to adding automated integration tests to your DoD.
  • Finally, there are the end-to-end tests. In my opinion, adding the creation of this type of test to your DoD should be considered very carefully. They take a lot of time to automate (even with an existing foundation), they often use the part of the application that is most likely to change in upcoming sprints (the UI), and they contribute the least to shortening the feedback loop.
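To illustrate how little code an API-level integration test needs, here’s a minimal sketch using REST Assured with JUnit. The base URI, endpoint and response fields are hypothetical stand-ins, not an actual API:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class CustomerApiTest {

    // Hypothetical API under test; replace with your own base URI.
    private static final String BASE_URI = "https://api.example.com";

    // A single request-response check: no browser, no page loads,
    // just fast feedback on the behavior of the API.
    @Test
    void retrievingAnExistingCustomerReturnsTheirName() {
        given()
            .baseUri(BASE_URI)
        .when()
            .get("/customers/12345")
        .then()
            .statusCode(200)
            .body("firstName", equalTo("Susan"));
    }
}
```

A check like this runs in (milli)seconds once the application is deployed, which is exactly why this level tends to hit the sweet spot between scope and feedback loop length.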

The ratio between tests that can be automated and tests for which it makes sense to be automated in the sprint can be depicted as follows. Familiar picture?

[Image: a test automation pyramid, captioned ‘Should you include automated tests in your Definition of Done?’]

Please note that like the original pyramid, this is a model, not a guideline. Feel free to apply it, alter it or forget it.

Jumping back to the ‘whether’ of including automation in your DoD, the answer is still a ‘yes’. As can be concluded from what I’ve talked about here, it’s more of a ‘yes, automation should have been considered and applied where it provides direct value to the team for the sprint or the upcoming couple of sprints’ rather than ‘yes, all possible scenarios that we’ve executed and that can be automated should have been automated in the sprint’. I’d love to hear how other teams have made automation a part of their DoD, so feel free to leave a comment.

And for those of you who’d like to see someone else’s take on this question, I highly recommend watching this talk by Angie Jones from the 2017 Quality Jam conference.

On crossing the bridge into unit testing land

Maybe it’s just the people and organizations I meet and work with, but no matter how actively they’re trying to implement automation and involve testers in it, there’s one bridge that’s often too far for those tasked with test automation, and that’s the bridge to unit testing land. When I ask why testers aren’t involved in unit testing, I typically get one (or more, or all) of the following answers:

  • ‘That’s the responsibility of our developers’
  • ‘I don’t know how to write unit tests’
  • ‘I’m already busy with other types of automation and I don’t have time for that’

While these answers might sound perfectly reasonable to some, I think there’s something inherently wrong with all of them. Let’s take a look:

  • With more and more teams becoming multidisciplinary, we can’t simply shift responsibility for any task to a specific subgroup. If ‘we’ (i.e., the testers) keep saying that unit testing is a developer’s responsibility, we’ll never get rid of the silos we’re trying to break down.
  • While you might not know how to actually write unit tests yourself, there’s a lot you CAN do to contribute to their value and effectiveness. Try reviewing them, for example: has the developer of the unit tests missed some obvious cases? (There’s a small example of this below the picture.)
  • Not having time to concern yourself with unit testing reminds me of the picture below. Really, if something can be covered with a decent set of unit tests, there is no need to write integration or even (shudder) end-to-end tests for it.

[Image: ‘Are you too busy to pay attention to unit testing?’]
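To make that reviewing suggestion a little more concrete, here’s a minimal, entirely hypothetical JUnit sketch. The production logic is inlined to keep it self-contained; the point is the kind of gap a tester’s eye tends to catch:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class ShippingCostTest {

    // Hypothetical production logic, inlined here to keep the sketch
    // self-contained: orders of 100.00 or more ship for free,
    // everything else costs 4.95.
    static double shippingCost(double orderTotal) {
        if (orderTotal < 0) {
            throw new IllegalArgumentException("Order total cannot be negative");
        }
        return orderTotal >= 100.00 ? 0.00 : 4.95;
    }

    // The developer covered the happy path...
    @Test
    void largeOrderShipsForFree() {
        assertEquals(0.00, shippingCost(150.00), 0.001);
    }

    // ...but a reviewing tester might ask: what about the boundary value itself?
    @Test
    void orderOfExactlyOneHundredShipsForFree() {
        assertEquals(0.00, shippingCost(100.00), 0.001);
    }

    // ...and what should happen with invalid input?
    @Test
    void negativeOrderTotalIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> shippingCost(-1.00));
    }
}
```

If the developer only wrote the first test, a reviewing tester adds value simply by asking about the boundary and the invalid input, without writing a single line of code themselves.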

I’m not a devotee of the test automation pyramid per se, but there IS a lot of truth to the concept that a decent set of unit tests should be the foundation of every solid test automation effort. Unit tests are relatively easy to write (even though it might not look that way to some), they run fast (no need to wait for web pages to load and complete their client-side processing, for example) and therefore they’re the best way to provide the fast feedback that development teams are looking for in this age of Continuous Integration / Delivery / Deployment / Testing / Everything / … .

To put it in even bolder terms, as a tester, I think you have the responsibility of familiarizing yourself with the unit testing activities that your development team is undertaking. Offer to review them. Try to understand what they do, what they check and where coverage could be improved. Yes, this might require you to actually talk to your developers! But it’ll be worth it, not just to you, but to the whole team and, in the end, also to your product and your customers. Over time, you might even write some unit tests yourself, though, again, that’s not a necessity for you to provide value in the land of unit testing. Plus, you’ll likely learn some new tricks and skills by doing so, and that’s always a good thing, right?

For those of you looking for another take on this subject, John Ruberto wrote an article called ‘100 percent unit test coverage is not enough’, which was published on StickyMinds. A highly recommended read.

P.S.: Remember Tesults, the SaaS solution for storing and displaying test results I wrote about a couple of months ago? The people behind Tesults recently let me know they now offer a free forever plan as well. So if you were interested in using their services but could not afford or justify the investment, it might be a good idea to check their new plan out here. And again, I am in no way, shape or form associated with, nor do I have a commercial interest in Tesults as an organization or a product. I still think it’s a great platform, though.

Tackle the hard problems first

When you’re given the task of creating a new test automation solution (or expanding upon an existing one, for that matter), it might be tempting to start working on creating tests right away. The more tests that are automated, the more time is freed to do other things, right? For any test automation solution to be successful and, more importantly, scalable, you’ll need to take care of a number of things, though. These may not seem to directly contribute to the coverage you’re getting with your automated tests, but failing to address them from the beginning will likely cause you some pretty decent headaches later on. So why not deal with them while things are still manageable?

Run tests from within a CI pipeline
If you’re like me, you’ll want to be able to run tests whenever needed, be it on demand, on a schedule or on a per-commit basis. You likely do not accept the ‘works on my machine’ attitude from a developer, do you? Also, I’ve been ranting on about how test automation is software development and should be treated as such, so start doing so! Have your tests uploaded to a build server and run them from there. This will ensure that there’s no funny stuff left in your test setup, such as references to absolute (or even relative) file paths that only exist on your machine (there’s a sketch of how to avoid this below), references to dependencies that are not automatically resolved, and so on. Also, in my experience, user interface-driven tests in particular always seem to behave a little differently with regard to timing and synchronization when run from a build server, so running them from there is the ultimate proof that they’re stable and can be trusted.
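As one example of such ‘funny stuff’: test data read from a machine-specific path will pass locally and fail on the build server. A small sketch (the file and class names are made up) of resolving test data from the classpath instead:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

class TestDataReader {

    // Reads a test data file from the classpath (e.g. src/test/resources),
    // so the test behaves identically on a developer machine and on the
    // build server. Contrast this with something like
    // Files.readString(Path.of("C:\\Users\\me\\testdata\\customer.json")),
    // which only works on one specific machine.
    static String readTestData(String resourceName) throws IOException {
        try (InputStream in = TestDataReader.class.getResourceAsStream("/" + resourceName)) {
            if (in == null) {
                throw new IOException("Test data file not found on classpath: " + resourceName);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```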

Take care of error handling and stability
Strongly related to the previous point is taking care of stabilizing your test execution and handling errors, both foreseen and unforeseen. This applies mainly to user interface-driven tests, but other types of automated tests should not be exempt from it either. My preferred way of implementing error handling is by means of wrapper methods around API calls (here’s an example for Selenium). Don’t be tempted to skip this part of implementing tests and ‘make do’ with less than optimal error handling. The risks of having to retrofit it later on, of implementing error handling in more than one place, and of spending a lot of time finding out exactly why in the name of Pete your test run failed are just too high. Especially when you stop running tests on your machine only, which, again, you should do as soon as humanly possible.
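This isn’t the example from the link above, just a minimal sketch of what such wrapper methods could look like in Java with Selenium (the timeout and failure messages are arbitrary choices of mine):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class SeleniumWrapper {

    private final WebDriverWait wait;

    SeleniumWrapper(WebDriver driver) {
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    // All waiting and error handling lives in one place, so individual tests
    // don't have to repeat it, and a failing run produces a readable message.
    void click(By locator) {
        try {
            wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
        } catch (TimeoutException e) {
            throw new AssertionError("Element " + locator + " was not clickable within 10 seconds", e);
        }
    }

    void type(By locator, String text) {
        try {
            wait.until(ExpectedConditions.visibilityOfElementLocated(locator)).sendKeys(text);
        } catch (TimeoutException e) {
            throw new AssertionError("Element " + locator + " was not visible within 10 seconds", e);
        }
    }
}
```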

Have a solid test data strategy
In my experience, one of the hardest problems to get right when it comes to creating automated tests is managing the test data. The more you move towards end-to-end tests, and the more (external) dependencies and systems are involved, the harder this is to get right. But even a system that operates in a relatively independent manner (and those are few and far between) can cause some headaches, simply because its data model can be so complex. No matter the situation you’re dealing with, having the right test data available at the right time (read: all the time) can be very hard to accomplish, and therefore this problem should be tackled as soon as possible. The earlier you think of a solid strategy, the more you’ll benefit from it in the long run. And don’t get complacent when everything is A-OK right now. There’s always the possibility that you’re simply working on those tests that do not yet need a better thought-out test data approach, but that doesn’t mean you’re safe forever!
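One strategy that has worked for me (your mileage may vary) is having each test create and clean up its own data through an API, instead of depending on whatever happens to live in a shared database. A sketch using REST Assured against a hypothetical customers endpoint:

```java
import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CustomerTest {

    // Hypothetical API under test.
    private static final String BASE_URI = "https://api.example.com";

    private String customerId;

    // Create a fresh, known customer before every test...
    @BeforeEach
    void createTestCustomer() {
        customerId =
            given()
                .baseUri(BASE_URI)
                .contentType("application/json")
                .body("{\"firstName\":\"Susan\",\"lastName\":\"Tester\"}")
            .when()
                .post("/customers")
            .then()
                .statusCode(201)
                .extract().path("id");
    }

    // ...and remove it afterwards, so no test ever depends on leftover data.
    @AfterEach
    void deleteTestCustomer() {
        given().baseUri(BASE_URI).delete("/customers/" + customerId);
    }

    @Test
    void newlyCreatedCustomerCanBeRetrieved() {
        given().baseUri(BASE_URI).get("/customers/" + customerId).then().statusCode(200);
    }
}
```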

Get your test environment in order
With all these new technologies like containers and virtual machines readily available, I’m still quite surprised to see how hard it is for some organizations to get a test environment set up. I still see this take weeks, sometimes. And then the environment is unavailable for hours or even days with every new deployment. Needless to say, that’s very counterproductive when you want to be able to run your automated tests on demand. So my advice would be to try and get this sorted out as soon as possible. Make sure that everybody knows that having a reliable and available test environment is paramount to the success of test automation. Because it is. And since modern systems are increasingly interconnected and ever more dependent on components both inside the organization and beyond its walls, don’t stop at getting a test environment for your primary application. Make sure that there’s a suitable test environment available and ready on demand for all components that your integration and end-to-end tests touch upon. Resort to service virtualization when there’s no ‘real’ alternative. Make sure that you can test whenever you want.
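To make the service virtualization suggestion a bit more tangible, here’s a minimal sketch using WireMock to simulate an HTTP dependency (the port, path and response body are made-up examples):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

class DependencySimulation {

    public static void main(String[] args) {
        // A local HTTP server standing in for a dependency that is
        // unavailable, unstable or does not exist yet.
        WireMockServer server = new WireMockServer(9876);
        server.start();

        // Answer like the real component would for this specific request.
        server.stubFor(get(urlEqualTo("/creditcheck/12345"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"status\":\"APPROVED\"}")));

        // Point the application under test at http://localhost:9876 and your
        // integration and end-to-end tests can run whenever you want.
    }
}
```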

In the end, writing more automated tests is never the hard problem. Making sure that everything is in place to enable a successful test automation suite to grow is. So tackle that first, while your suite is still small, and increasing the coverage of your automated tests will become so much easier.