How I continuously hone my skills and why you should too

As a consultant, and even more so as a freelance consultant, it is imperative to stay relevant and be able to land your next project every time. Or, and that’s the position I prefer to be in, to have your next project(s) find you. I thought it would be a nice idea to share how I go about keeping up in the ever-changing world of test automation, testing and software development in general.

Before I dive into the strategies I employ to remain relevant, though, I’d like to stress once more that test automation, like testing and software development, is a craft. I’ve said this so many times already, but I’ll just press ‘repeat’ and say it once more: being a good test automation crafts(wo)man requires specific skills that need to be constantly sharpened if you want to remain relevant. And I don’t think I’m going out on a limb here when I assume that you actually want to remain relevant.

Also, I’m writing this post partly as preparation for a potential mentee who contacted me with some questions regarding test automation career development. If this works out, it’ll be the first time for me in a mentor role, and that’s something I’m both very much looking forward to and scared as hell of.

So, what is it that I do to try and stay on the forefront of the test automation field as best as I can?

Discover your way to provide value
I’m no proponent of the ‘jack of all trades, master of none’, excuse me, ‘generalist’ approach to career development. I think that in order to be truly valuable to any team or organization, it’s important to pick your superpower and grow it as best you can. For me, that’s helping organizations define and implement the best possible way for their test automation efforts. For others, that might be writing the best possible Selenium tests, or being the best possible software tester, or … It doesn’t matter WHAT your superpower is; as long as it provides value to the software development process, you’re likely to remain relevant for the duration of your career. You will need to constantly monitor whether you’re still providing that value, though.

Get hired for the role you want to grow into
This applies especially to freelance consultants. Teams and organizations that hire you tend to do so because you know how to do something well, simply because you’ve done it before. That’s all fine and dandy in the short term, of course (there’s always the next mortgage installment or child care bill that needs to be paid), but there’s a real risk of becoming a one trick pony in the longer term. Personally, I’m always evaluating a project offer for things I can learn and whether those are things I actually want to learn, i.e., whether the things I’ll be learning will make me a slightly better consultant or trainer after the project has come to an end. I tend to get bored on projects quite easily, and that’s especially the case when I’m repeating the same stuff over and over for a couple of months. And that helps neither me nor my client.

For all of you who are employees, it might be a little easier, simply because organizations are often willing to invest in your professional and personal development. Still, I’d advise you to ask plenty of questions about opportunities to grow and explore new things when you’re in an interview. Being in a job that allows you to explore, learn and grow is much, much more important than a couple of hundred extra bucks every month. Really, it is. Also, grow as a crafts(wo)man and that pay rise will come, if not at your current employer, then in the form of an offer from another one. At which point you’re advised to look at the professional and personal development options THEY provide, of course.

Honing your skills

Attend and speak at conferences
Next to my day to day work, I find that one of the best ways to grow as a consultant is by attending and contributing to conferences. The talks and workshops offered by the organizing committee are but one of the ways you can learn and benefit from being at a conference. For me, meeting peers, discussing the craft with them, as well as some good old fashioned networking, often proves to be even more valuable in the longer term.

And attending is just one side of the whole conference thing: actually speaking at one might be an even better way to improve your craftsmanship. Yes, preparing a talk takes a lot of time, but it is an excellent way to organize your thoughts and experiences and to ‘find your voice’. And yes, going up on stage is nerve-wracking, but remember: that’s what every speaker feels, even those that are way more experienced. Not convinced? Just read up on the #SpeakerConfessions hashtag on Twitter to see what I mean.

Reading blogs and listening to podcasts
There’s a wider testing and automation world out there, outside of the confines of your office (or wherever you do your work). I’ve been gaining access to the thoughts, experiences, successes and failures of fellow automation engineers and consultants for a couple of years now, simply by reading their blogs, learning from them and applying what I’ve learned in my own work.

Instead of singling out individual blogs that I read (which is quite a long list anyway), I’ll mention two of the greatest starting points for blog reading here: the Testing Curator Blog by Matt Hutchison and the Ministry of Testing blog feed. Both are excellent sources for all things testing and automation to read for your pleasure and learning.

Another way to learn from others is by listening to testing and automation podcasts. Since I spend a significant amount of time in the car (it’s been getting less and less, though, since I started reshaping my career, which is something I do like), I like to spend that time in a useful way, and one of the best ways is listening to podcasts. There are plenty of software testing podcasts out there, so I’d suggest doing a search on iTunes and finding something you like. As for me, I almost never miss an episode of Joe Colantonio’s TestTalks or Keith Klain’s Quality Remarks podcast. There are some other podcasts that I listen to on and off as well.

Writing your own blog
There’s no learning from other people’s experiences through blog posts without there being people who actually write those blog posts, obviously. I’d recommend considering starting your own. To me, there’s no better way to organize my thoughts and express how I feel about certain topics than writing a blog post (or a series of blog posts) on those topics. A blog is also a great way to showcase your projects, thoughts, insights and experiences to the wider world, although it must be said that writing a blog post or two and expecting the work and the world to come to you might lead to a little bit of disappointment. Instead, you should remember that you’re writing for yourself in the first place; anyone else reading or reacting to your posts is a bonus. Having said that, when you’re consistently (and consistency IS key) putting out decent content and showing it to the outside world, the interaction and feedback will come.

It’s just not going to happen overnight. As an example: it took me three years to get the little bit of traction I’m having at the moment. I’m very glad I stuck with it, though, because so much good has come out of it! Being given the opportunity to write and publish an ebook, being offered speaking opportunities abroad, writing articles for industry websites: none of it would have happened (or at least it would have taken a lot longer) without this blog. I’m grateful for that every day, and apart from the interaction with readers, it’s one of the main reasons I keep at it.

Wrapping this up, I’d like to repeat that continuously honing your skills as a testing or automation engineer, consultant or whatever it is that you’re doing is imperative to remaining relevant and staying ahead in the competitive business we’re in. Hopefully the above has given you some motivation or pointers to start being an even better crafts(wo)man than the one you already are. Again, I’d love to hear your thoughts and feedback.

On including automation in your Definition of Done

Working with different teams in different organizations means that I’m regularly faced with the question of whether and how to include automation in the Definition of Done (DoD) that is used in Agile software development. I’m not an Agilist myself per se (I’ve seen too many teams get lost in overly long discussions on story points and sticky notes), but I DO like to help people and teams struggling with the place of automation in their sprints. As for the ‘whether’ question: yes, I definitely think that automation should be included in any DoD. The answer to the ‘how’ of including, a question that could be rephrased as the ‘what’ to include, is a little more nuanced.

For starters, I’m not too keen on rigid DoD statements like

  • All scenarios that are executed during testing and that can be automated, should be automated
  • All code should be under 100% unit test coverage
  • All automated tests should pass at least three consecutive times, except on Mondays, when they should pass four times.

OK, I haven’t actually seen that last one, but you get my point. Stories change from sprint to sprint. Impact on production code, be it new code that needs to be written, existing code that needs to be updated or refactored or old code that needs to be removed (my personal favorite) will change from story to story, from sprint to sprint. Then why keep statements regarding your automated tests as rigid as the above examples? Doesn’t make sense to me.

I’d rather see something like:

Creation of automated tests is considered and discussed for every story and their overarching epic and applied where deemed valuable. Existing automated tests are updated where necessary, and removed if redundant.

You might be thinking ‘but this cannot be measured, how do we know we’re doing it right?’. That’s a very good question, and one that I do not have a definitive answer for myself, at least not yet. But I am of the opinion that knowing where to apply automation, and more importantly, where to refrain from automation, is more of an art than a science. I am open to suggestions for metrics and alternative opinions, of course, so if you’ve got something to say, please do.

Having said that, one metric that you might consider when deciding whether or not to automate a given test or set of tests is whether your technical debt increases or decreases. The following consideration might be a bit rough, but bear with me; I’m sort of thinking out loud here. On the one hand, given that a test is valuable, having it automated will shorten the feedback loop and decrease technical debt. On the other hand, automating a test takes time in itself and increases the size of the code base to be maintained. Choosing which tests to automate is about finding the right balance with regard to technical debt. And since the optimum will likely differ from one user story to the next, I don’t think it makes much sense to put overly generalizing statements about what should be automated in a DoD. Instead, for every story, ask yourself:

Are we decreasing or increasing our technical debt when we automate tests for this story? What’s the optimum way of automating tests for this story?

The outcome might be to create a lot of automated tests, but it might also be to not automate anything at all. Again, all depending on the story and its contents.

Another take on the question whether or not to include automated test creation in your DoD might be to discern between the different scope levels of tests:

  • Creating unit tests for the code that implements your user story will often be a good idea. They’re relatively cheap to write, they run fast and thereby, they’re giving you fast feedback on the quality of your code. More importantly, unit tests act as the primary safety net for future development and refactoring efforts. And I don’t know about you, but when I undertake something new, I’d like to have a safety net just in case. Much like in a circus. I’m deliberately refraining from stating that both circuses and Agile teams also tend to feature a not insignificant number of clowns, so forget I said that.
  • You’ll probably also want to automate a significant portion of your integration tests. These tests, for example those executed at the API level, can be harder to perform manually and are relatively cheap to automate with the right tools. They’re also my personal favorite type of automated test, because they sit at the optimum point between scope and feedback loop length. It might be harder to write integration tests when the component you’re integrating with is outside of your team’s control, or does not yet exist. In that case, simulations might need to be created, which requires additional effort that might not be perceived as directly contributing to the sprint. This should be taken into account when it comes to adding automated integration tests to your DoD.
  • Finally, there are the end-to-end tests. In my opinion, adding the creation of this type of test to your DoD should be considered very carefully. They take a lot of time to automate (even with an existing foundation), they often use the part of the application that is most likely to change in upcoming sprints (the UI), and they contribute the least to shortening the feedback loop.
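To make the ‘cheap and fast’ point about unit tests concrete, here’s a hypothetical, self-contained sketch in plain Java. The class, method and business rule are invented purely for illustration (in a real project you’d write this as a JUnit test):

```java
// Hypothetical production code for a user story: calculating an order discount.
public class DiscountCalculatorDemo {

    // Returns the discount percentage for a given order total.
    static int discountFor(double orderTotal) {
        if (orderTotal >= 1000) {
            return 10;
        }
        if (orderTotal >= 500) {
            return 5;
        }
        return 0;
    }

    // A handful of unit-level checks: quick to write, fast to run,
    // and a safety net for future refactoring of discountFor().
    public static void main(String[] args) {
        if (discountFor(1200) != 10) throw new AssertionError("expected 10% for large orders");
        if (discountFor(600) != 5) throw new AssertionError("expected 5% for medium orders");
        if (discountFor(100) != 0) throw new AssertionError("expected no discount for small orders");
        System.out.println("All unit checks passed");
    }
}
```

A check like this runs in milliseconds and breaks immediately when someone changes the discount rules, which is exactly the safety net the bullet above describes.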

The ratio between tests that can be automated and tests that it makes sense to automate in the sprint can be depicted as follows. Familiar picture?

Should you include automated tests in your Definition of Done?

Please note that like the original pyramid, this is a model, not a guideline. Feel free to apply it, alter it or forget it.

Jumping back to the ‘whether’ of including automation in your DoD, the answer is still a ‘yes’. As can be concluded from what I’ve talked about here, it’s more of a ‘yes, automation should have been considered and applied where it provides direct value to the team for the sprint or the upcoming couple of sprints’ rather than ‘yes, all possible scenarios that we’ve executed and that can be automated should have been automated in the sprint’. I’d love to hear how other teams have made automation a part of their DoD, so feel free to leave a comment.

And for those of you who’d like to see someone else’s take on this question, I highly recommend watching this talk by Angie Jones from the 2017 Quality Jam conference:

Creating executable specifications with Spectrum

One of the most important features of a good set of automated tests is that they require a minimal amount of explanation and/or documentation. When I see someone else’s test, I’d like to be able to instantly see what its purpose is and how it’s constructed in terms of setting up > execution > verification (for example using given/when/then or arrange/act/assert). That’s one of the reasons that I’ve been using Cucumber (or SpecFlow, depending on the client) in several of my automated test solutions, even though the team wasn’t doing Behaviour Driven Development.

It’s always a good idea to look for alternatives, though. Last week I was made aware of Spectrum, which is, as creator Greg Haskins calls it, “A BDD-style test runner for Java 8. Inspired by Jasmine, RSpec, and Cucumber.” I’ve worked with Jasmine a little before, and even though it took me a while to get used to the syntax and lambda notation style, I liked the way it lets you document your tests directly in the code and produce human-readable results. Since Spectrum is created as a Jasmine-like test runner for Java, and it also supports the (at least in the Java world) much more common Gherkin given/when/then syntax, I thought it’d be a good idea to check it out.

Spectrum is ‘just’ another Java library, so adding it to your project is a breeze when using Maven or Gradle. Note that since Spectrum uses lambda functions, it won’t work unless you’re using Java 8 or higher. Spectrum runs on top of JUnit, so you’ll need that too. It works with all kinds of assertion libraries, so if you’re partial to Hamcrest, for example, that’s no problem at all.
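As a sketch, the Maven side of that setup could look like the snippet below. The version numbers are assumptions on my part; check Maven Central for the latest releases:

```xml
<!-- Spectrum itself; version shown is an assumption, check for the latest release -->
<dependency>
    <groupId>com.greghaskins</groupId>
    <artifactId>spectrum</artifactId>
    <version>1.2.0</version>
    <scope>test</scope>
</dependency>
<!-- JUnit 4, which Spectrum runs on top of -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
```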

As I said, Spectrum basically supports two types of specs: the Jasmine-style describe/it specification and the Gherkin-style given/when/then specification using features and scenarios. Let’s take a quick look at the Jasmine-style specifications first. For this example, I’m resorting once again to REST Assured tests. I’d like to verify that Max Verstappen is in the list of 2017 drivers in one test, and that both Fernando Alonso and Lewis Hamilton are in that list in another test. This is how that looks with Spectrum:

// Imports assumed for this example; exact packages may differ per Spectrum/REST Assured version
import static com.greghaskins.spectrum.Spectrum.describe;
import static com.greghaskins.spectrum.Spectrum.it;
import static io.restassured.RestAssured.get;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.hasItems;

import com.greghaskins.spectrum.Spectrum;
import io.restassured.response.ValidatableResponse;
import org.junit.runner.RunWith;

@RunWith(Spectrum.class)
public class SpectrumTests {{

	describe("The list of drivers for the 2017 season", () -> {

		ValidatableResponse response = get("http://ergast.com/api/f1/2017/drivers.json").then();

		String listOfDriverIds = "MRData.DriverTable.Drivers.driverId";

		it("includes Max Verstappen", () -> {

			response.assertThat().body(listOfDriverIds, hasItem("max_verstappen"));
		});

		it("also includes Fernando Alonso and Lewis Hamilton", () -> {

			response.assertThat().body(listOfDriverIds, hasItems("alonso","hamilton"));
		});
	});
}}

Since Spectrum runs ‘on top of’ JUnit, executing this specification is a matter of running it as a JUnit test. This results in the following output:

Spectrum output for Jasmine-style specs

Besides this (admittedly quite straightforward) example, Spectrum also comes with support for setup (using beforeEach and beforeAll) and teardown (using afterEach and afterAll), focusing on or ignoring specific specs, tagging specs, and more. You can find the documentation here.

The other type of specification supported by Spectrum is the Gherkin syntax. Let’s say I want to recreate the same specifications as above in the given/when/then format. With Spectrum, that looks like this:

// Imports assumed for this example; the Gherkin DSL's package location may differ per Spectrum version
import static com.greghaskins.spectrum.dsl.gherkin.Gherkin.*;
import static io.restassured.RestAssured.get;
import static org.hamcrest.Matchers.hasItem;

import com.greghaskins.spectrum.Spectrum;
import com.greghaskins.spectrum.Variable;
import io.restassured.response.Response;
import org.junit.runner.RunWith;

@RunWith(Spectrum.class)
public class SpectrumTestsGherkin {{

	feature("2017 driver list verification", () -> {

		scenario("Verify that max_verstappen is in the list of 2017 drivers", () -> {

			final Variable<String> endpoint = new Variable<>();
			final Variable<Response> response = new Variable<>();

			given("We have an endpoint that gives us the list of 2017 drivers", () -> {
				
				endpoint.set("http://ergast.com/api/f1/2017/drivers.json");
			});

			when("we retrieve the list from that endpoint", () -> {
				
				response.set(get(endpoint.get()));
			});
			then("max_verstappen is in the driver list", () -> {
				
				response.get().then().assertThat().body("MRData.DriverTable.Drivers.driverId", hasItem("max_verstappen"));
			});
		});

		scenarioOutline("Verify that there are also some other people in the list of 2017 drivers", (driver) -> {

			final Variable<String> endpoint = new Variable<>();
			final Variable<Response> response = new Variable<>();

			given("We have an endpoint that gives us the list of 2017 drivers", () -> {
				
				endpoint.set("http://ergast.com/api/f1/2017/drivers.json");
			});

			when("we retrieve the list from that endpoint", () -> {
				
				response.set(get(endpoint.get()));
			});
			then(driver + " is in the driver list", () -> {
				
				response.get().then().assertThat().body("MRData.DriverTable.Drivers.driverId", hasItem(driver));
			});
		},

		withExamples(
				example("hamilton"),
				example("alonso"),
				example("vettel")
			)
		);
	});
}}

Running this spec shows that it does indeed work:

Spectrum output for Gherkin-style specs

There are two things that are fundamentally different from using the Jasmine-style syntax (the rest is ‘just’ syntactical):

  • Support for scenario outlines enables you to create data driven tests easily. Maybe this can be done using the Jasmine-style syntax too, but I haven’t figured it out so far.
  • If you want to pass variables between the given, the when and the then steps you’ll need to do so by using the Variable construct. This works with the Jasmine-style syntax too, but you’ll likely need to use it more in the Gherkin case (since ‘given/when/then’ are three steps, where ‘it’ is just one). When your tests get larger and more complex, having to use get() and set() every time you want to access or assign a variable might get cumbersome.
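The need for the Variable construct stems from a Java language rule: local variables captured by a lambda must be effectively final, so you can’t simply reassign them inside a step. A mutable holder works around this, because the reference to the holder itself never changes. Here’s a minimal, self-contained sketch of the idea, using a simplified stand-in for Spectrum’s Variable (not the real class):

```java
import java.util.function.Consumer;

public class VariableDemo {

    // Simplified stand-in for Spectrum's Variable: a mutable holder whose
    // reference stays effectively final, so lambdas can capture it.
    static class Holder<T> {
        private T value;
        T get() { return value; }
        void set(T value) { this.value = value; }
    }

    public static void main(String[] args) {
        final Holder<String> endpoint = new Holder<>();

        // Writing `endpoint = "..."` inside the lambda would not compile:
        // captured locals must be effectively final. Mutating the holder's
        // contents is fine, because the reference itself never changes.
        Consumer<String> given = url -> endpoint.set(url);
        given.accept("http://ergast.com/api/f1/2017/drivers.json");

        System.out.println(endpoint.get());
    }
}
```

This is exactly why the Gherkin-style examples above are littered with get() and set() calls, and why that overhead grows with the size of your scenarios.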

Having said that, I think Spectrum is a good addition to the test runner / BDD-supporting tool set available for Java, and something that you might want to consider using for your current (or next) test automation project. After all, any library or tool that makes your tests and/or test results more readable is worth taking note of. Right?

You can find a small Maven project containing the examples featured in this blog post here.