On looking back on 2017 and looking forward to 2018

As I like to do every year, now that 2017 has almost come to an end, I’m carving out some time to reflect on all that I’ve been working on this year. What has been successful? What needs working on? And most importantly, what are the things I’ll put my energy towards in 2018?

The start of On Test Automation – the sole proprietorship
Perhaps the most significant change I made this year was to quit working with The Future Group to venture out on my own under the On Test Automation name. On the other hand, not much has changed since then. I’m still working as a freelancer, still doing a bit of consulting, some teaching, some writing, pretty much what I was doing before I made this move. The only things that have changed are the financial side and the fact that I’m now truly working for myself instead of mostly for myself. It’s just that little extra bit of freedom, albeit only psychologically. I would be very surprised if I weren’t still freelancing like this at the end of 2018. It has proven to be the optimal way of working for me, with total freedom over what I’m working on, at what time and with whom. I’d like to further reduce the time I spend commuting in 2018, though.

Consulting
On the consulting side of things, it has been a pretty strong year. I’ve been working on projects for 5 or 6 different clients, mostly as the person responsible for developing automation solutions, but also sometimes acting more like an automation coach of sorts. I’ve been lucky enough never to have to look for a new project for long. The job market for experienced automation engineers and consultants is so good over here that I’ve had to turn down more projects than I’ve been able to accept. I consider myself very lucky in this respect, although I do like to think that the time I invest in learning, sharing my knowledge and networking has at least partly brought me to where I am now.

Where 80-90% of my working hours in 2017 were spent on consulting work, in 2018 I would like to slowly bring that down to around 50% to free up time for other activities.

Teaching
This year, I’ve written a couple of blog posts, most notably this one, about what I’d like to see changed in education around test automation and what I think good test automation training should look like. As you might have seen elsewhere on this site, I currently offer a couple of courses, and I am looking to expand my offerings in 2018. More importantly, I’d like to deliver significantly more training next year. A quick count tells me I’ve delivered about 10 days of training in 2017, mostly for clients, but I also did a very enjoyable workshop at the Romanian Testing Conference.

For 2018, I’d like to work towards teaching 5 or more days each month. This will require significant effort from my side, not only in actual teaching, but more importantly in marketing and promotion to make sure that I can deliver them in the first place. I’ve front-loaded some of that work by closing partnerships with a couple of other players in the field, and I’ve landed a couple of teaching gigs already (more on that in a future blog post, undoubtedly), but there’s much more work to be done if I want to achieve the ‘5 days of teaching per month’ goal.

Conferences
2017 was a relatively quiet year for me with regard to conferences. In the Netherlands, I only attended two (both organized by TestNet). Abroad, there was the previously mentioned Romanian Testing Conference in May as well as the splendid TestBash Manchester in October, making for a grand total of four conferences.

I expect 2018 to be busier on the conference front. In fact, I’ve got my agenda for the spring conference season pretty much filled up already with TestBash Netherlands (where I’ll be doing a workshop together with Ard Kramer), the TestNet spring conference (where I probably won’t be speaking or hosting a workshop for the first time in a while), my second Romanian Testing Conference (where I’ll be doing both a workshop and a talk this time) and the Test Automation Day (which I missed this year due to being on holiday and where I hope to be accepted as a speaker for the first time this coming year). So that’s four conferences before the summer. And that’s not counting the Agile Tour Vienna meetup in March, where I’ve been invited to do a talk / live coding session as well. And I haven’t even started to think about the fall conference season yet (that’s a lie, some negotiations are underway).

Writing
Including this one, I’ve published 46 blog posts on this site in 2017. I started out with the promise of publishing a blog post every week, and I’ve kept true to my word for most of the year, but last month I came to the conclusion that I’ve been spreading myself a little thin on the writing front. Apart from these blog posts, I also wrote and published 15 articles on other websites, including TechBeacon, StickyMinds and LinkedIn. That’s a lot of writing, I can tell you.

Next year, I’ll probably be blogging less, in an effort to create higher quality output. I’ll also still be doing articles for other websites (I’m working on two of those as we speak). I’m aiming to publish at least one quality blog post on here each month, plus some reviews of conferences, books and other resources whenever I feel like it. That should free up some time to invest in other interesting things that I encounter.

All in all, 2017 has been a great year for me, I’ve met many interesting people and worked on a lot of interesting stuff. 2018 will hopefully be a year of spreading myself a little less thin, instead focusing more on the good stuff. As always, I’ll keep you posted.

On preventing your test suite from becoming too user interface-heavy

In August of last year, I published a blog post talking about why I don’t like to think of automation in terms of frameworks, but rather in terms of solutions. I’ve softened a little since then (this is probably a sign of me getting old...), but my belief still stands that building a framework might lead to automation engineers subsequently trying to fit every test left, right and center into that framework. One example of this phenomenon that I still see far too often: engineers build a feature-rich end-to-end automation framework (for example using Selenium) and then automate all of their tests using that framework.

This is what I meant in the older post by ‘framework think’: because the framework has made it so easy for them to add new tests, they skip the step where they decide what would be the most efficient approach for a specific test and blindly add it to the test suite run by that very framework. This might not lead to harmful side effects in the short term, but as the test suite grows, chances are high that it becomes unwieldy, that the time it takes to complete a full test run becomes unnecessarily long and that maintenance efforts are not being outweighed by the added value of having the automated tests any more.

In this post, I’d like to take the practical approach once more and demonstrate how you can take a closer look at your application and decide whether there might be a more efficient way to implement certain checks. We’re going to do this by opening up the user interface and seeing what happens ‘under the hood’. I’m writing this post as an addendum to my ‘Building great end-to-end tests with Selenium and Cucumber / SpecFlow‘ course, by the way. Yes, that’s right, one of the first things I talk about during my course on writing tests with Selenium is when not to do so. I firmly believe that’s one of the very first steps towards creating a solid test suite: deciding what should not be in it.

The application under test
The application we’re going to write tests for is an online mortgage orientation tool, provided by a major Dutch online bank. I’ve removed all references to the client name, just to be sure, but it’s not like we’re dealing with sensitive data here. The orientation tool is a sequence of three forms, in which people who are interested in a mortgage fill in details about their financial situation, after which the orientation tool gives an indication of whether or not the applicant is eligible for a mortgage, as well as an estimate of the maximum amount of the mortgage, the interest rate and the monthly installments payable.

Our application under test - the mortgage orientation tool

What are we going to automate?
Now that we know what our application under test does, let’s see what we should automate. We’ll assume that there is a justified need for automated checks in the first place (otherwise this would have been a very short blog post!). We’ll also assume that, maybe for tests on some other part of the bank’s website, there is already a solid automation framework written around Selenium in place. So, this being a website and all, it makes sense to write some additional checks and incorporate them into the existing framework.

First of all, let’s try and make sure that the orientation tool can be used and completed, and that it displays a result. I’d say that would be a good candidate for an automated test written using Selenium, since it confirms that the application is working from an end user perspective (there is value in the test) and I can’t think of a lower level test that would give me the same feedback. Since there are a couple of different paths through the orientation tool (you can apply for a mortgage alone or with someone else, some people have a house to sell while others have not, there are different types of contracts, etc.), I’d even go as far as to say you’ll need more than one Selenium-based test to be able to properly claim that all paths can be traversed by an end user.
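To give an idea of what organizing those path variations could look like, here is a minimal Selenium-based sketch. Everything in it (the URL, the element id, the scenario fields) is made up for illustration; the real application’s selectors and your framework’s own page objects would take their place:

```python
# Hypothetical path variations through the mortgage orientation tool.
# The field names and values are illustrative, not the real form's.
SCENARIOS = [
    {"applying_together": False, "house_to_sell": False, "contract": "permanent"},
    {"applying_together": True, "house_to_sell": False, "contract": "temporary"},
    {"applying_together": False, "house_to_sell": True, "contract": "self_employed"},
]


def complete_orientation_tool(driver, scenario):
    """Walk one path through the three forms and return the result text.

    `driver` is a Selenium WebDriver instance; the URL and element id
    used below are assumptions, not the application's real ones.
    """
    driver.get("https://example-bank.nl/mortgage-orientation")
    # ... fill in the three forms based on `scenario` here ...
    return driver.find_element("id", "result-summary").text


def verify_all_paths(driver):
    """One UI-level check per path: the tool can be completed and shows a result."""
    for scenario in SCENARIOS:
        result = complete_orientation_tool(driver, scenario)
        assert result, f"No result shown for scenario {scenario}"
```

Note that these checks only assert that each path can be completed and produces *a* result; they deliberately say nothing about whether the numbers shown are correct.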

Next, I can imagine that you’d want to make sure that the numbers that are displayed are correct, so your customers aren’t misinformed when they complete the orientation tool. That would lead to some massive issues of distrust later on in the mortgage application process, I’d assume... Since we’ve been able to add the previous tests so easily to our existing framework, it makes sense to add some more tests that walk through the forms, enter the data required to trigger a specific expected outcome and verify that the result screen we saw in the screenshot above displays the expected numbers. Right?

No. Not right.

It’s highly likely that the business logic used to perform the calculation and serve the numbers displayed on screen isn’t actually implemented in the user interface. Rather, it’s probably served up by a backend service containing the business logic and rules required to perform the calculations (and with mortgages, there are quite a few of those business rules, I’ve been told..). The user interface takes the values entered by the end user, sends them to a backend service that performs calculations and returns the values indicating mortgage eligibility, interest rate, height of monthly installment, etc., which are then interpreted and displayed again by that same user interface.

So, since the business logic that we’re verifying isn’t implemented in the user interface, why use the UI to verify it in the first place? That would most likely only lead to unnecessarily slow tests and shallow feedback. Instead, let’s see if there’s a different hook we can use to write these tests.

I tend to use one of two different tactics to find out if there are better ways to write automated tests in cases like these:

  1. Talk to a developer. They’re building the stuff, so they’ll probably know more about the architecture of your application and will likely be happy to help you out.
  2. Use a network analyzing tool such as Fiddler or WireShark. Tools like these two let you see what happens ‘under the hood’ when you’re using the user interface of a web application.

Normally, I’ll use a combination of both: find out more about the architecture of an application by talking to developers, then use a network analyzer (I prefer Fiddler myself) to see what API calls are triggered when I perform a certain action.

Analyzing API calls using Fiddler
So, let’s put my assumption that there’s a better way to automate the tests that will verify the calculations performed by the mortgage orientation tool to the test. To do so, I’ll fire up Fiddler and have it monitor the traffic that’s being sent back and forth between my browser and the application server while I interact with the orientation tool. Here’s what that looks like:

Traffic exchanged between client and server in our mortgage orientation tool

As you can see, there’s a mortgage orientation API with a Calculate operation that returns exactly those numbers that appear on the screen. See the number I marked in yellow? It’s right there in the application screenshot I showed previously. This shows that pretty much all the front end does is make calls to a backend API and present the data it returns in a manner attractive to the end user. This means that it would not make sense to use the UI to verify the calculations. Instead, I’d advise you to mimic the API call (or sequence of calls), as this will give you both faster and more accurate feedback.
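As a sketch of what such an API-level check could look like: the endpoint URL and the field names below are guesses loosely based on the kind of traffic a Fiddler capture shows, not the bank’s actual API. Separating the assertions into a pure verification function keeps the checking logic independent of how the call is made:

```python
import json
from urllib import request  # stdlib only; a library like `requests` works just as well

# Hypothetical endpoint; the real operation name and URL will differ.
CALCULATE_URL = "https://example-bank.nl/api/mortgage-orientation/calculate"


def verify_calculation(response_body: dict, expected_max_mortgage: float) -> None:
    """Pure check on a Calculate response, independent of any UI or transport."""
    assert response_body["eligible"] is True
    assert abs(response_body["maxMortgage"] - expected_max_mortgage) < 0.01
    assert response_body["monthlyInstallment"] > 0


def call_calculate(payload: dict) -> dict:
    """POST the applicant's data to the (hypothetical) Calculate operation."""
    req = request.Request(
        CALCULATE_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# A canned response, just to illustrate the shape of the check:
sample = {"eligible": True, "maxMortgage": 245000.0, "monthlyInstallment": 843.21}
verify_calculation(sample, expected_max_mortgage=245000.0)
```

A data-driven runner can then feed many input/expected-output combinations through `call_calculate` and `verify_calculation`, which is exactly the kind of repetition that is cheap at this level and expensive through the UI.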

To take things even further, I’d recommend diving even deeper into the application to see whether the calculations can be covered with a decent set of unit tests. The easiest way to do this is to talk to a developer to find out whether this is possible, and whether they haven’t already done so. There’s no need to maintain two different sets of automated checks that cover the same logic, and no need to cover logic that can be tested through unit tests with API-level checks...
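To give an idea of what such a unit-level check could look like: the function below implements the standard annuity formula for a fixed-rate mortgage. The function name is my own invention, and the real calculation will of course involve many more business rules than this:

```python
def monthly_installment(principal: float, annual_rate: float, years: int) -> float:
    """Standard annuity formula: the fixed monthly payment for a fixed-rate mortgage."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # total number of monthly payments
    if r == 0:
        return principal / n  # degenerate case: interest-free loan
    return principal * r / (1 - (1 + r) ** -n)


# A unit-level check runs in microseconds, with no browser or HTTP involved:
payment = monthly_installment(200_000, 0.03, 30)
assert 843 < payment < 844  # roughly 843.21 for 200k at 3% over 30 years
```

Checks like this sit right next to the logic they verify, which is also why the developers may well have written them already.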

Often, though, I find that writing tests like this at the API level hits the sweet spot between coverage, the effort it takes to write the tests, and speed of execution (and, as a result, the length of the feedback loop). This might be because I’m not too well versed in writing unit tests myself, but it has worked pretty well for me so far.

Deciding what to automate where: a heuristic
The above is just one example of a case where it would be better (as well as easier) to move specific checks from the UI level to the API level. But can we make some more generic statements about when to use UI-level checks and when to dive deeper?

Yes, we can. And it turns out, someone already did! In a recent blog post called ‘UI Test Heuristic: Don’t Repeat Your Paths‘, Chris McMahon talked about this exact subject, and the heuristic he presents in his blog post applies here perfectly:

  • Check that the end user can complete the mortgage orientation tools and is shown an indication of mortgage eligibility and associated figures > different paths through the user interface > user interface-level tests
  • Check that the figures served up by the mortgage orientation tool are correct > repeating the same paths multiple times, but with different sets of input data and expected output values > time to dive deeper

So, if you want to prevent your automated test suite from becoming too bloated with UI tests, this is a rule of thumb you can (and frankly, should) apply. As always, I’d love to hear what you think.

On quality over quantity and my career journey

As you might have read in last week’s blog post, TestBash Manchester, the talks I heard there and the discussions I had around the event with other speakers and attendees left me with a lot to think about. Martin Hynie’s talk on tester craftsmanship, apprentices, journeymen and masters of the craft in particular led me to ask myself a lot of questions about where I am now, how I ended up here, where I want to go and, most importantly, whether the things I am doing at the moment contribute to, or perhaps hinder, my journey towards who I want to become.

Martin’s talk and how he described masters of a craft confirmed to me that that is what I want to become: a master in the craft of automation. Someone others turn to when they need help, and someone who is able to help and guide others on their way to becoming a master -or at least a better craftsperson- themselves. I also immediately realized that I’m nowhere near that point yet.

I might be on my way, possibly (hopefully!) even on the right way, but having thought about this for a bit now, it once more occurred to me that there is so much more to learn. Some aspects that I need to improve are directly tied to automation and testing, others are skills that are more broadly applicable (public speaking, teaching, communication skills, to name just a few), but all in all, there’s a lot of learning left to do.

I am very much looking forward to taking the next steps on my path towards mastery, but I also realize that I need to get rid of some superfluous baggage at the moment, consisting mostly of activities that take up a lot of my time yet aren’t contributing (enough) to my journey. In the words of the German designer and academic Dieter Rams, it’s time for ‘less but better’, or ‘weniger aber besser’ as he puts it himself, being a German and all..

Anyway, there are a couple of work-related activities that I will need to get rid of -or at least change significantly- in order to carve out the time required to work on what’s important. Starting with the projects I’m working on. I’ve just wrapped up one, but I’m still working on two different projects in parallel.

Where I used to think this was the ideal situation to be in (I do get bored quickly if I’m working on the same thing for too long), I’ve slowly started to come to the realization that all this context switching is driving down the quality of my work. Believe me, no matter how hard you try to dedicate specific days to specific projects, there will always be overlap in the form of emails, phone calls and other seemingly urgent, and sometimes even important, interruptions. Just like with other forms of multitasking, I lose a lot of time moving my mind from one project to the other and back again, sometimes multiple times a day.

What doesn’t help is that not all of the projects I’ve been working on lately have been equally satisfying (and in specific cases, that’s putting it mildly...). Doing only one project at a time should allow me to think more clearly about whether or not the project is, in fact, a good fit for me. So, as soon as I wrap up my current projects, I’ll commit to working on just a single client project (meaning by-the-hour consulting work) at a time. Ideally, that would take up 3 (sometimes 4, sometimes 2) days of my working week, ensuring that I’m both set with regard to my financial commitments (gotta feed the kids!) and have enough time left to dedicate to the other things I want and/or feel the need to work on. Most of those things revolve around training courses, workshops and a bit of public speaking, by the way.

Committing to less but better also means that I’m, at least for the moment, giving up on writing weekly blog posts for this site. Even though it is a highly rewarding activity, it takes a lot of time to plan, write and review blog posts. I’ll leave the discussion on whether or not my blog posts look like significant time has been put into them to you... Instead, I’ll shift towards writing at least one blog post per month.

The good news is that this will leave me more time to do research and thinking for my blog posts, which (at least in theory) should lead to higher-quality output. Again, less but better... I might post more often than once a month, in case I’ve read a good book related to testing or automation, have a conference experience I want to share, or anything else I feel like writing about, since those posts take less effort in my case. However, I think I need to stop pressuring myself into writing a weekly blog post, since it might start to affect the quality soon. If it hasn’t started doing so already.

Lastly, I am considering looking for a mentor who can help me take the next steps on my journey towards mastery. The measures above should help free up time to do the things I feel are important (e.g., more time for learning, more time to invest in teaching and developing courses), but I am by now quite convinced that I might benefit from a mentor who can help me navigate the career and life path that’s ahead of me. I’d love to hear from others who either have been at roughly the same point in their career and have (or have not) benefited from having a mentor, or who can help me find a good one. All input is greatly appreciated.

So, in short, you’ll hear less from me from this moment on, but hopefully also more. And better. I’m looking forward to the next stage of my journey.