2016: My year in numbers

Now that the year is almost over, I thought it might be nice to jump on the end-of-year bandwagon and look back on 2016 a bit, and at the same time look ahead to what 2017 hopefully has in store for me. And since I deal with facts, let’s have a look at some numbers:

Number of conferences attended: 5
Conferences are a wonderful opportunity for learning, sharing, catching up with people and getting to know your peers and other people in the field. That’s why I actively try and attend or contribute to as many useful conferences as possible. This year, I’ve attended TestWorksConf, the Test Automation Day, the TestNet conference (twice) and I had my first international conference experience at TestCon in Vilnius, Lithuania. I’ve heard many useful things and met many great people, so I’ll definitely try and attend at least as many conferences next year. Three of them are already in my agenda (TestWorksConf, Test Automation Day and Romania Testing Conference), so there are two more to go. One more, if you count Automation Guild (more on that later). Hopefully 2017 will include another adventure abroad, but I’m already quite thrilled to be in Romania next May!

Number of presentations given (not counting in house as part of consulting work): 4
Last year, I gave two presentations at different conferences: one at TestWorksConf, one at TestCon. Apart from that, I hosted (on invitation) two evening sessions for service providers in the testing field in the Netherlands. As it’s a personal goal of mine to become a better public speaker, I’m actively looking to increase this number in the next year. Even though I like hosting workshops even more than delivering talks (more on those below), public speaking is something I like to do and I’m looking forward to doing more of it. One talk has already been scheduled: my Automation Guild session on January 13th. I’ve also submitted a couple of conference proposals for 2017, so let’s hope they’ll be selected.

Number of workshops hosted: 3
I hosted three technical workshops last year: two on REST Assured and one on WireMock. All of these were hosted at conferences. For next year, I am actively looking for more opportunities to host workshops and other training sessions, both at conferences and in house. I’m starting a partnership with my former employer, in which I’ll be responsible for delivering training on several topics related to test automation, so that’s a very good start! Some other announcements with regard to training offerings might be made very soon as well…

Number of articles written: 7 (including an ebook)
Another thing I like doing is writing articles about things related to test automation and service virtualization that interest me. Last year, I published six articles through various media (LinkedIn, O’Reilly, and a couple of magazines). Also, my very first (short) ebook saw the light of day, something I’m still pretty proud of. The first article for 2017 is already on its way, and we’re discussing a follow-up, so in this area too, 2017 is looking to be another great year. Of course, I’m keeping my eyes open for even more opportunities.

Number of clients served: 6
For IT consultancy, this is a pretty high number. This is partly because I had a couple of projects that ended prematurely (for several different reasons), but the shift I’ve been making this year, moving away from 40-hour weeks at a single client, has had an effect on this number as well. Currently, I’m working for two different clients simultaneously, and I’m loving the flexibility and freedom this gives me in terms of time and attention. It’s also a shift away from being IN a development team (as a tester / an automation engineer) towards being a consultant supporting and serving development teams. 2017 might not see such a high number of clients, but one thing’s for sure: I won’t be working for a single client full-time if it can be helped. If only because there are so many other interesting things to do…

So, that’s 2016 almost done. To wrap things up, here are a couple of numbers related to this blog:

Number of posts published (including this one): 48
Somewhere in the first half of this year I committed myself to publishing a post every week, on the same day and the same time. No excuses. That publishing day and time is Wednesday at 08:00 GMT, for those who hadn’t noticed (which is probably most of you). Sticking to this schedule hasn’t always been easy, but it’s helped me enormously to keep thinking about things to write about and about the test automation and service virtualization fields in general. Therefore, for next year, the target will be 52. Again, one post, every Wednesday, 8 AM GMT, no excuses.

Number of comments received and sent (up to now): 802
I think WordPress counts all pingbacks as comments as well, and of course, my own comments don’t count, but that would still leave at least 300-400 solid responses and questions to the posts I’ve written. Wow. All I can say is keep them coming! I love feedback, both positive and negative, on the stuff I ramble about here.

Number of page views (up to now): 238,156
Again: wow. That’s a lot! Hopefully you’ll keep visiting this blog next year!

Should test automation be left to developers?

I am not a developer. I have a background in Computer Science and I know a thing or two about writing code, but I am definitely not a developer. This is similar to me knowing how to prepare food, but not being a cook. People who have met me in person probably remember this analogy, and will possibly be bored to death by me using it again (sorry!).

On the other hand, I also try to repeat as often as possible that test automation is software development, and that it should be treated as such. So, you might ask, how come I am working in test automation, yet I don’t call myself a developer? And more importantly, shouldn’t test automation be left to developers, if it really IS software development? Recent blog posts I’ve read and presentations I’ve heard have made me think a lot about this.

So, to start with the second and most important question: should test automation be left to developers? My answer would be yes.

And no.

Test automation should be left to developers, because
writing automated tests involves writing code. Personally, I don’t believe in codeless solutions that advertise that ‘anybody’ can create automated tests. So, since code writing is involved, and since nobody knows how to write code better than a developer, I think that writing automated tests can best be done by a developer.

I’m going to use myself as an example here. Since writing code isn’t in my DNA, I find it takes me three times as long – at the minimum – to write automated tests compared to your average developer. And with high-pressure projects, short release cycles and the trend of having fewer and fewer testers for every developer in development teams, I just couldn’t keep up.

To make matters worse, new automation tools are being released at what seems like a daily rate. This is especially true within the Java and JavaScript worlds, although it might be just as good (or bad?) in other ecosystems and simply less visible to me personally. I don’t know. For a while, as a non-developer, I’ve tried to keep up with all of these tools, but I just couldn’t manage. So, instead of frantically trying to keep up, I’ve made a career shift. This came naturally after I realized that

Test automation should not be left to developers, because
nobody knows testing better than testers. As I said, I (and possibly many people involved in test automation along with me) am not skilled enough to compete with the best developers when it comes to creating automated tests. However, I think I (and a lot of others) can teach a lot of people a thing or two about the equally, if not more, important questions of why you should (or shouldn’t) do test automation, and which tests should and should not be automated.

A lot of developers I have met don’t have a testing mindset (yet!) and tend to think solely along the happy path: ‘let’s see if this works…’, rather than ‘let’s see what happens if I do this…’, so to speak. Writing automated tests, just like writing specifications for the software that needs to be built, requires a tester’s mindset to think beyond the obvious happy flow. This is why I think it isn’t wise to see test automation as a task purely for developers.
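To make that a little more concrete, here’s a minimal JUnit sketch (the Account class is invented purely for this illustration). The first test is the happy flow that tends to get written first; the other two only exist because somebody asked ‘let’s see what happens if I do this…’:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class WithdrawalTest {

    // Minimal class under test, just enough to make the example self-contained.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        int getBalance() { return balance; }
        void withdraw(int amount) {
            if (amount < 0) throw new IllegalArgumentException("negative amount");
            if (amount > balance) throw new IllegalStateException("insufficient funds");
            balance -= amount;
        }
    }

    // The happy path: the kind of check most of us write first.
    @Test
    public void withdrawingLessThanBalanceLowersBalance() {
        Account account = new Account(100);
        account.withdraw(30);
        assertEquals(70, account.getBalance());
    }

    // The tester's mindset: what happens when I do THIS?
    @Test(expected = IllegalStateException.class)
    public void withdrawingMoreThanBalanceIsRejected() {
        new Account(100).withdraw(150);
    }

    @Test(expected = IllegalArgumentException.class)
    public void withdrawingANegativeAmountIsRejected() {
        new Account(100).withdraw(-10);
    }
}
```

The code itself isn’t the point here; the questions behind the last two tests are.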

It also helps if people other than developers feel some sort of responsibility for creating automated tests. Given that most developers are busy people, they regularly need to choose between writing tests and writing production code. Almost always, that decision will be made in favor of production code. This makes sense from a deadline perspective, but not always as much when you look at it from a quality and process point of view. Having people in other roles (testing, for example) stressing the need for a mature and stable automated testing suite will definitely improve the likelihood of such a suite actually being created and maintained. Which might just benefit the people continuously fretting about those deadlines in the long run…

So, to answer the first question I posed at the beginning of this post: why am I still working in test automation despite not being a developer? Because nowadays I mainly focus on helping people answer the why and the what of test automation. I do some work related to the how question as well, especially for smaller clients that are at the beginning of their test automation journey or that don’t have a lot of experienced developers in house, but not as much as before. I love to tinker with tools every now and then, if only to keep up and remain relevant and hireable. I’m not as much involved in the day-to-day work of writing automated tests anymore, though. Which is fine with me, because it plays to my strengths. And I think that’ll benefit everybody, in the long run.

I think the test automation community could use a lot more highly skilled people that are (not) developers.

(Not so) useful metrics in test automation

In order to remain in control of your test automation efforts, it’s a good idea to define and track metrics, so that you can take action in case the metrics tell you your efforts aren’t yielding the right results. And even more importantly, it allows you to bask in glory if they do! But what exactly are good metrics when it comes to test automation? In this blog post, I’ll take a look at some metrics that I think are useful, and some that I think the automation world can easily do without. Note that I’m not even going to try and present a complete list of metrics that can be used to track your automation efforts, but hopefully the ones mentioned here can move you a little closer to the right path.

So, what do I think could be a useful metric when tracking the effectiveness and/or the results of your test automation efforts?

Feedback loop duration reduction
The first metric I’ll suggest here is not related to the quality of an application, but rather to the quality of the development process. At its heart, test automation is – or at least should be – meant to increase the effectiveness of testing efforts. One way to measure this is to track the time that elapses between the moment a developer commits a change and the moment (s)he is informed about the effects these changes have had on application quality. This time, also known as the feedback loop time, should ideally be as short as possible. If it takes two weeks before a developer hears about the negative (side) effects of a change, he or she will have long moved on to other things. Or projects. Or jobs. If, instead, feedback is given within minutes (or even seconds), it’s far easier to correct course directly. One way to shorten the feedback loop is through effective use of test automation, so use it wisely and track the effects that automation has on your feedback loop.
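As a deliberately simplified sketch of what this metric boils down to: in practice, the two timestamps would come from your version control system and your CI server; here they are hard-coded, and the class name is made up.

```java
import java.time.Duration;
import java.time.Instant;

// Feedback loop duration: the time between a change being committed and the
// developer seeing the test results for that change.
public class FeedbackLoopMetric {

    public static void main(String[] args) {
        // Hard-coded for illustration; normally these come from your VCS and CI server.
        Instant commitTime = Instant.parse("2016-12-28T08:00:00Z");
        Instant feedbackTime = Instant.parse("2016-12-28T08:07:30Z");

        Duration feedbackLoop = Duration.between(commitTime, feedbackTime);
        System.out.println("Feedback loop: " + feedbackLoop.toMinutes() + " minutes");
        // Track this value per build over time: a downward trend means your
        // automation is doing its job, an upward trend means it's time to act.
    }
}
```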

Customer journey completion rate
This might come as a surprise to some, but test automation, testing and software development in general are still activities that serve a greater good: your customer base. In this light, it makes sense to have some metrics that relate directly to the extent to which a customer is able to use your application, right? A prime example of this would be the number of predefined critical customer journeys that can (still) be completed by means of an automated test after a change to the software has been developed and deployed. By critical, I mean journeys that relate directly to revenue generation, customer acquisition and other such trivialities. The easier and more often you can verify (using automation) that these journeys can still be completed, the more confidence you’ll have deploying that shiny new application version into your production environment.
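A minimal sketch of how the metric itself could be computed, assuming every critical journey is implemented as an automated check that reports whether the journey can still be completed (the journey names and results below are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JourneyCompletionRate {

    public static void main(String[] args) {
        // In a real suite, these booleans would be the outcomes of automated
        // end-to-end checks (Selenium, REST Assured, ...); hard-coded here.
        Map<String, Boolean> journeyResults = new LinkedHashMap<>();
        journeyResults.put("search and view product", true);
        journeyResults.put("add to cart and check out", true);
        journeyResults.put("create account", false);

        long completed = journeyResults.values().stream().filter(r -> r).count();
        double completionRate = 100.0 * completed / journeyResults.size();

        System.out.printf("Critical journeys completed: %.0f%%%n", completionRate);
        // For journeys tied directly to revenue, anything below 100% should
        // make you think twice about deploying.
    }
}
```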

False positives and negatives
Automated tests are only truly valuable if you can trust them fully to give you the feedback you need. That is: whenever an automated test case fails, it should be due to a defect (or an unnoticed change) in your application, not because your automation is failing (you don’t want false negatives). The other way around, whenever an automated test case passes, you should have complete trust that the component or application under test indeed works as intended (you want false positives even less). False negatives are annoying, but at least they don’t go unnoticed. Fix their root cause and move on. If a false negative can’t be fixed, don’t be afraid to throw away the test, because if you can’t trust it, it’s worthless. False positives are the biggest pain in the backside, because they do go unnoticed. If all is green in the world of automation, it’s easy (and quite human) to trust the results, even when all you’re checking is that 1 equals 1 (see also below). One approach to detecting and fixing false positives, at least in the unit testing area, is the use of mutation testing. If this is not an option, be sure to regularly review your automated checks to see that they still have their desired defect detection dint (or 4D, coining my first ever useless automation acronym here!).
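To show what mutation testing brings to the table, here’s a made-up example of a false positive waiting to happen (the Discount class is invented for this illustration). The test below executes every line of the code under test, yet its assertion is so weak that a tool like PIT, which among other things mutates arithmetic operators, would report a surviving mutant:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class DiscountTest {

    // Class under test, invented for this illustration.
    static class Discount {
        static int apply(int price) {
            return price - (price / 10); // 10% discount
        }
    }

    // If a mutation tool flips the '-' above into a '+', apply(100) returns
    // 110 instead of 90, and this test STILL passes: a surviving mutant,
    // and proof that the check can't be fully trusted.
    @Test
    public void discountedPriceIsPositive() {
        assertTrue(Discount.apply(100) > 0);
    }

    // A stronger assertion, such as assertEquals(90, Discount.apply(100)),
    // would kill that mutant.
}
```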

Where there are useful metrics, there are also ones that aren’t as valuable (or are downright worthless)…

Code coverage
A metric that is often used to express the coverage of a suite of unit tests. The main problem I have with this metric is that in theory it all sounds perfectly sensible (‘every line of our code is executed at least once when we run our tests!’), but in practice, it doesn’t say anything about the quality and the effectiveness of the tests, nor does it say anything about actual application quality. For example, it’s perfectly possible to write unit tests that touch all lines of your code and then assert that 1 equals 1. Or that apples equal apples. These tests will run smoothly. They’ll pass every time, or at least until 1 does not equal 1 anymore (but I think we’re safe for the foreseeable future). Code coverage tools will show a nice and shiny ‘100%’. Yet it means nothing in terms of application quality.
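For those who’d like to see just how little that ‘100%’ can mean, here’s a deliberately useless (and completely made-up) example. It executes every line of the method under test, so coverage tools are perfectly happy, yet it verifies nothing at all:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CoverageTheaterTest {

    // Method under test, invented for this illustration.
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    public void touchesEveryLineButChecksNothing() {
        add(2, 3);          // every line of add() is now 'covered'...
        assertEquals(1, 1); // ...and this passes until 1 no longer equals 1
    }
}
```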

Percentage of overall number of test cases that are automated
A product of the ‘automate all the things!!’ phenomenon. In theory, it looks cool: ‘We automated 83.56% of our tests!’. But especially with exploratory testing (not my forte, so I might be wrong here), there is no such thing as a fixed and predetermined number of test cases anymore. As such, expressing the number of automated tests as a percentage of a variable or even nonexistent total is worthless. Or downright lying (you pick one). There’s only one metric that counts in this respect, quoting Alan Page:

You should automate 100% of the tests that should be automated

Reduction in numbers of testers
Yes, it’s 2016. And yes, some organizations still think this way. I’m not even going to spend time explaining why ‘if we automate our tests, we can do with X fewer testers’ is horrible thinking. However, in a blog post mentioning good, bad and downright ugly metrics related to test automation, it simply could not go unmentioned.

Anyway, wrapping all this up, I think my view on automation metrics can be summarized like this: automation metrics should tell you something useful about the quality of your application or your development process. Metrics related to automation itself, or to something that automation cannot be used for (actual testing being a prime example), might not be worth tracking or even thinking about.