On automation implementation frustration

Recently (as in, in the last couple of months) I’ve wrapped up a few automation projects that have left me less than satisfied. Not because there were no technical challenges – there were plenty. Not because I felt bored – if I’m getting bored, I simply move on. Not because I didn’t learn anything new – I’ve become a much better engineer in the process and learned a lot about what makes automation efforts useful.

No, my dissatisfaction was caused by something different. Something I should have seen coming. Something I, in retrospect, should have addressed earlier.

But before I explain what caused this dissatisfaction, let’s take a quick look at what a typical project of mine looks like. Usually, it starts with a client with a test automation-related challenge. Sometimes they’re just getting started. Sometimes they’ve tried some things already, only to see those attempts fail. In any case, at some point they decide it’s a good idea to hire me (maybe that’s where things go wrong, but no comment).

I then get to work, usually starting out by asking a lot of questions. What does the application under test do? What does the software development process look like? What do the testers do? Where’s the (perceived) risk? What do the stakeholders expect to gain from introducing test automation? What have they tried already, and why did it fail? All of these questions matter when deciding on the best possible next step(s).

Then, I usually start getting involved in the actual automation. Sometimes that means building a brand new solution. Sometimes it’s training others to do so, or to maintain and extend what I built. In other projects, it’s running awareness workshops to remind people why they’re implementing test automation in the first place, and to help them get realistic about the expectations around it. Often, it’s a mixture of all of these activities.

I’m not one to boast, but most of the time, things tend to go well during the project. I’ve seen and written enough horrible automation in the past to recognize what works and what doesn’t, and as a result, I’m usually able to figure out an approach that brings value to the development process instead of being a drain on time and money. So, that’s not the problem. There IS a problem, though, and it surfaces when I start wrapping up.

Too often, by the time I am preparing for my exit from the project, I get the feeling that a lot of the work I’ve done has been in vain. There are no tangible clues that support this feeling, but still, I sometimes just know that once I walk out of the office for the last time, the automation I’ve created will become shelfware. The most important reasons for that? Teams that do not see test automation as software development, and teams that continuously give priority to feature delivery, often pushed by management setting deadlines. The latter is especially cruel when those same organizations claim they ‘do Agile’ and that this enables them to ‘deliver software faster’. Sure, that might work in the short term, but it’s not a strategy that will result in a sustainable pace of delivery in the longer term. But I digress.

Now, I’ll be the first one to (at least partly) blame myself for the test automation starting to gather dust once I’m gone. In retrospect, I did a number of things right in these past projects, like deciding what to automate before starting out, and deciding which approach would be the most effective in terms of coverage, speed of execution and maintainability. However, in some cases, I seem to have forgotten something even more important: creating the right level of awareness and setting the right expectations for the automation. Looking back, I should have made a bigger effort to show the teams and organizations I work with that test automation isn’t something you do once, or only when you have the time. Instead, it’s a software development project within a software development project, and it should be treated as such.

Writing about this here and now means I’ve learned a valuable lesson. But more importantly, I hope to remind you not to make the mistake I’ve made too often in the past: just getting started without keeping the end in mind. I know I will do better in the future. I hope you will too. Start asking questions like ‘who will be responsible for maintaining and extending the automation once I’m gone?’ and ‘how are we going to make and keep automation an integral part of our development process?’. Don’t repeat my mistakes. Start with the end in mind.

As I said, I learned my lesson here. In the project I’m working on at the moment, I’m working hard at creating the right amount of awareness and helping the organization decide who is going to take ownership of the automation solutions I create once I’m gone. I’ve set my last day on this project as April 26th, so there’s plenty of time. But as with a lot of things, time flies, and making your exit as smooth and as fulfilling as possible isn’t something you can start doing two weeks before you bring the goodbye cake. This project has an additional complicating factor, too: it’s the first time this organization is executing a software development project on this scale. It’ll make for an interesting three months, I’m sure. But I’m also confident that once I do say goodbye to this team, the automation I delivered will be in good hands. I’m looking forward to walking away much more satisfied this time.

On looking back on 2017 and looking forward to 2018

As I like to do every year, now that 2017 has almost come to an end, I’m carving out some time to reflect on all that I’ve been working on this year. What has been successful? What needs working on? And most importantly, what are the things I’ll put my energy towards in 2018?

The start of On Test Automation – the sole proprietorship
Perhaps the most significant change I made this year was to quit working with The Future Group and venture out on my own under the On Test Automation name. On the other hand, not much has changed at all since then. I’m still working as a freelancer, still doing a bit of consulting, some teaching, some writing – pretty much what I was doing before I made this move. The only things that have changed are the financial side and the fact that I’m now really working for myself instead of mostly for myself. It’s just that little extra bit of freedom, albeit only psychologically. I would be very surprised if I’m not still freelancing like this at the end of 2018. It has proven to be the optimal way of working for me, with total freedom over what I’m working on, at what time, and with whom. I’d like to reduce the time I spend commuting a little further in 2018, though.

Consulting
On the consulting side of things, it has been a pretty strong year. I’ve been working on projects for 5 or 6 different clients, mostly as the person responsible for developing automation solutions, but sometimes acting more like an automation coach of sorts. I’ve been lucky enough never to have to look for a new project for long. The job market for experienced automation engineers and consultants is so good over here that I’ve had to turn down more projects than I’ve been able to accept. I consider myself very lucky in this respect, although I do like to think that the time I invest in learning, sharing my knowledge and networking has at least partly brought me to where I am now.

Whereas 80-90% of my working hours in 2017 were spent on consulting work, in 2018 I would like to slowly bring that down to around 50% to free up time for other activities.

Teaching
This year, I’ve written a couple of blog posts, most notably this one, about what I’d like to see changed in education around test automation and what I think good test automation training should look like. As you might have seen elsewhere on this site, I currently offer a couple of courses, and I am looking to expand my offerings in 2018. More importantly, I’d like to deliver significantly more training next year. A quick count shows I’ve delivered about 10 days of training in 2017, mostly with clients, but I also did a very enjoyable workshop at the Romanian Testing Conference.

For 2018, I’d like to work towards teaching 5 or more days each month. This will require significant effort on my side, not only in the actual teaching, but more importantly in marketing and promotion, to make sure I can deliver these courses in the first place. I’ve front-loaded some of that work by closing partnerships with a couple of other players in the field, and I’ve landed a couple of teaching gigs already (more on that in a future blog post, undoubtedly), but there’s much more work to be done if I want to achieve the ‘5 days of teaching per month’ goal.

Conferences
2017 was a relatively quiet year for me with regards to conferences. In the Netherlands, I only attended 2 (both organized by TestNet). Abroad, there was the previously mentioned Romanian Testing Conference in May as well as the splendid TestBash Manchester in October, making for a grand total of 4 conferences.

I expect 2018 to be busier on the conference front. In fact, my agenda for the spring conference season is pretty much filled up already with TestBash Netherlands (where I’ll be doing a workshop together with Ard Kramer), the TestNet spring conference (where, for the first time in a while, I probably won’t be speaking or hosting a workshop), my second Romanian Testing Conference (where I’ll be doing both a workshop and a talk this time) and the Test Automation Day (which I missed this year due to being on holiday, and where I hope to be accepted as a speaker for the first time this coming year). So that’s four conferences before the summer. And that’s not counting the Agile Tour Vienna meetup in March, where I’ve been invited to do a talk / live coding session as well. And I haven’t even started to think about the fall conference season yet (that’s a lie, some negotiations are underway).

Writing
Including this one, I’ve published 46 blog posts on this site in 2017. I started out with the promise of publishing a blog post every week, and I’ve kept true to my word for most of the year, but last month I came to the conclusion that I’ve been spreading myself a little thin on the writing front. Apart from these blog posts, I also wrote and published 15 articles on other websites, including TechBeacon, StickyMinds and LinkedIn. That’s a lot of writing, I can tell you.

Next year, I’ll probably be blogging less, in an effort to create higher quality output. I’ll also still be doing articles for other websites (I’m working on two of those as we speak). I’m aiming to publish at least one quality blog post on here each month, plus some reviews of conferences, books and other resources whenever I feel like it. That should free up some time to invest in other interesting things that I encounter.

All in all, 2017 has been a great year for me, I’ve met many interesting people and worked on a lot of interesting stuff. 2018 will hopefully be a year of spreading myself a little less thin, instead focusing more on the good stuff. As always, I’ll keep you posted.

On preventing your test suite from becoming too user interface-heavy

In August of last year, I published a blog post talking about why I don’t like to think of automation in terms of frameworks, but rather in terms of solutions. I’ve softened a little since then (probably a sign of me getting old...), but my belief still stands that building a framework may lead to automation engineers subsequently trying to fit every test, left, right and center, into that framework. One example of this phenomenon in particular I still see too often: engineers building a feature-rich end-to-end automation framework (for example using Selenium) and then automating all of their tests using that framework.

This is what I meant in that older post by ‘framework think’: because the framework has made it so easy to add new tests, engineers skip the step of deciding what the most efficient approach for a specific test would be and blindly add it to the test suite run by that very framework. This might not lead to harmful side effects in the short term, but as the test suite grows, chances are high that it becomes unwieldy, that a full test run takes unnecessarily long, and that the maintenance burden is no longer outweighed by the value the automated tests add.

In this post, I’d like to take the practical approach once more and demonstrate how you can take a closer look at your application and decide whether there might be a more efficient way to implement certain checks. We’re going to do this by opening up the user interface and seeing what happens ‘under the hood’. I’m writing this post as an addendum to my ‘Building great end-to-end tests with Selenium and Cucumber / SpecFlow‘ course, by the way. Yes, that’s right: one of the first things I talk about during my course on writing tests with Selenium is when not to do so. I firmly believe that’s one of the very first steps towards creating a solid test suite: deciding what should not be in it.

The application under test
The application we’re going to write tests for is an online mortgage orientation tool, provided by a major Dutch online bank. I’ve removed all references to the client name, just to be sure, but it’s not like we’re dealing with sensitive data here. The orientation tool is a sequence of three forms, in which people who are interested in a mortgage fill in details about their financial situation, after which the tool gives an indication of whether or not the applicant is eligible for a mortgage, as well as an estimate of the maximum mortgage amount, the interest rate and the monthly installments payable.

Our application under test - the mortgage orientation tool

What are we going to automate?
Now that we know what our application under test does, let’s see what we should automate. We’ll assume that there is a justified need for automated checks in the first place (otherwise this would have been a very short blog post!). We’ll also assume that, maybe for tests on some other part of the bank’s website, there is already a solid automation framework written around Selenium in place. So, this being a website and all, it makes sense to write some additional checks and incorporate them into the existing framework.

First of all, let’s try and make sure that the orientation tool can be used and completed, and that it displays a result. I’d say that would be a good candidate for an automated test written using Selenium, since it confirms that the application works from an end user perspective (so there is value in the test), and I can’t think of a lower-level test that would give me the same feedback. Since there are a couple of different paths through the orientation tool (you can apply for a mortgage alone or with someone else, some people have a house to sell while others have not, there are different types of contracts, etc.), I’d even go as far as to say you’ll need more than one Selenium-based test to be able to properly claim that all paths can be traversed by an end user.
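
To give you an idea, here’s a minimal sketch of what one such path-traversal test could look like in Java with Selenium WebDriver. To be clear: the URL, the element locators and the exact form structure are all assumptions I made up for this example, not the client’s actual markup.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.Select;
import org.openqa.selenium.support.ui.WebDriverWait;

public class MortgageOrientationPathTest {

    // Drives a single path through the orientation tool and checks that
    // a result is displayed. The URL and all locators are hypothetical.
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        WebDriverWait wait = new WebDriverWait(driver, 10);
        try {
            driver.get("https://example-bank.nl/mortgage-orientation");

            // Form 1: personal situation (single applicant, no house to sell)
            driver.findElement(By.id("applyingAlone")).click();
            driver.findElement(By.id("noHouseToSell")).click();
            driver.findElement(By.id("nextStep")).click();

            // Form 2: financial situation
            driver.findElement(By.id("grossYearlyIncome")).sendKeys("45000");
            new Select(driver.findElement(By.id("contractType")))
                .selectByVisibleText("Permanent");
            driver.findElement(By.id("nextStep")).click();

            // Form 3: request the calculation and wait for a result to appear;
            // verifying the actual figures is deliberately left to other tests
            driver.findElement(By.id("calculate")).click();
            wait.until(ExpectedConditions.visibilityOfElementLocated(
                    By.id("maximumMortgageAmount")));
        } finally {
            driver.quit();
        }
    }
}
```

In a real suite, this would of course live in a proper test framework with page objects instead of a main method, but the shape of the test stays the same: walk one path, then check that a result is shown at all.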

Next, I can imagine that you’d want to make sure that the numbers displayed are correct, so your customers aren’t misinformed when they complete the orientation tool. Incorrect numbers here would lead to some massive issues of distrust later on in the mortgage application process, I’d assume... Since we’ve been able to add the previous tests so easily to our existing framework, it makes sense to add some more tests that walk through the forms, enter the data required to trigger a specific expected outcome and verify that the result screen we saw in the screenshot above displays the expected numbers. Right?

No. Not right.

It’s highly likely that the business logic used to perform the calculation and serve the numbers displayed on screen isn’t actually implemented in the user interface. Rather, it’s probably served up by a backend service containing the business logic and rules required to perform the calculations (and with mortgages, there are quite a few of those business rules, I’ve been told...). The user interface takes the values entered by the end user and sends them to a backend service, which performs the calculations and returns the values indicating mortgage eligibility, interest rate, height of the monthly installment, etc., which are then interpreted and displayed by that same user interface.

So, since the business logic that we’re verifying isn’t implemented in the user interface, why use the UI to verify it in the first place? That would most likely only lead to unnecessarily slow tests and shallow feedback. Instead, let’s see if there’s a different hook we can use to write tests.

I tend to use one of two tactics to find out whether there are better ways to write automated tests in cases like these:

  1. Talk to a developer. They’re building the stuff, so they’ll probably know more about the architecture of your application and will likely be happy to help you out.
  2. Use a network analysis tool such as Fiddler or Wireshark. Tools like these let you see what happens ‘under the hood’ when you’re using the user interface of a web application.

Normally, I’ll use a combination of both: find out more about the architecture of an application by talking to developers, then use a network analyzer (I prefer Fiddler myself) to see what API calls are triggered when I perform a certain action.

Analyzing API calls using Fiddler
So, let’s put to the test my assumption that there’s a better way to automate the checks that verify the calculations performed by the mortgage orientation tool. To do so, I’ll fire up Fiddler and have it monitor the traffic being sent back and forth between my browser and the application server while I interact with the orientation tool. Here’s what that looks like:

Traffic exchanged between client and server in our mortgage orientation tool

As you can see, there’s a mortgage orientation API with a Calculate operation that returns exactly those numbers that appear on the screen. See the number I marked in yellow? It’s right there in the application screenshot I showed previously. This shows that pretty much all the front end does is call a backend API and present the data returned by it in a manner attractive to the end user. This means it would not make sense to use the UI to verify the calculations. Instead, I’d advise you to mimic the API call (or sequence of calls), as this will give you both faster and more accurate feedback.
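
As an illustration, here’s a sketch of what such an API-level check could look like, using REST Assured. Keep in mind that the endpoint, the request fields and the expected figure below are assumptions based on the kind of traffic Fiddler showed, not the bank’s actual API:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class MortgageCalculationApiCheck {

    // Mimics the Calculate call the UI makes and checks one returned figure.
    // Endpoint, request fields and expected value are all hypothetical.
    public static void main(String[] args) {
        given()
            .contentType("application/json")
            .body("{\"grossYearlyIncome\": 45000, \"applyingAlone\": true}")
        .when()
            .post("https://example-bank.nl/api/mortgage-orientation/calculate")
        .then()
            .statusCode(200)
            .body("maximumMortgageAmount", equalTo(215000));
    }
}
```

A check like this exercises the same business logic as its UI-based counterpart, but runs in a fraction of the time and fails with far more precise feedback.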

To take things even further, I’d recommend diving even deeper into the application and seeing whether the calculations can be covered with a decent set of unit tests. The easiest way to find out is to talk to a developer and ask whether this is a possibility, and whether they haven’t already done so. There’s no need to maintain two different sets of automated checks that cover the same logic, and no need to cover logic with API-level checks when it can be tested through unit tests...
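
Purely to illustrate the idea: if the developers expose the calculation logic in a class of its own, a unit test for it might look something like the sketch below. The MortgageCalculator class and its simplistic calculation rule are entirely made up here; in reality, you’d be testing the developers’ real implementation.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MortgageCalculatorTest {

    // Hypothetical stand-in for the production class holding the calculation
    // logic; the rule below is invented purely to make this example run.
    static class MortgageCalculator {
        int maximumAmount(int grossYearlyIncome) {
            return grossYearlyIncome * 5 - 10000;
        }
    }

    @Test
    public void maximumMortgageAmountForSingleApplicant() {
        MortgageCalculator calculator = new MortgageCalculator();
        // No browser, no HTTP: just the business rule, verified in milliseconds
        assertEquals(215000, calculator.maximumAmount(45000));
    }
}
```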

Often, though, I find that writing tests like this at the API level hits the sweet spot between coverage, the effort it takes to write the tests, and speed of execution (and, as a result, the length of the feedback loop). This might be because I’m not too well versed in writing unit tests myself, but it has worked pretty well for me so far.

Deciding what to automate where: a heuristic
The above has just been one example where it would be better (as well as easier) to move specific checks from the UI level to the API level. But can we make some more generic statements about when to use UI-level checks and when to dive deeper?

Yes, we can. And it turns out someone already did! In a recent blog post called ‘UI Test Heuristic: Don’t Repeat Your Paths‘, Chris McMahon talked about this exact subject, and the heuristic he presents applies here perfectly:

  • Check that the end user can complete the mortgage orientation tool and is shown an indication of mortgage eligibility and associated figures > different paths through the user interface > user interface-level tests
  • Check that the figures served up by the mortgage orientation tool are correct > repeating the same paths multiple times, but with different sets of input data and expected output values > time to dive deeper (see the sketch below)
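
To make that second bullet concrete, a data-driven variant of the earlier API-level sketch could look like this. Again, the endpoint and the input/output combinations are invented for the purpose of this example:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class MortgageCalculationDataDrivenCheck {

    // One path, many data sets: each row holds a gross yearly income and the
    // maximum mortgage amount we expect back. All values are made up.
    public static void main(String[] args) {
        int[][] testData = {
            { 30000, 140000 },
            { 45000, 215000 },
            { 60000, 290000 }
        };

        for (int[] row : testData) {
            given()
                .contentType("application/json")
                .body(String.format(
                    "{\"grossYearlyIncome\": %d, \"applyingAlone\": true}", row[0]))
            .when()
                .post("https://example-bank.nl/api/mortgage-orientation/calculate")
            .then()
                .statusCode(200)
                .body("maximumMortgageAmount", equalTo(row[1]));
        }
    }
}
```

Note how the path through the application stays exactly the same and only the data varies: that’s precisely the kind of repetition you don’t want to push through a browser.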

So, if you want to prevent your automated test suite from becoming too bloated with UI tests, this is a rule of thumb you can (and frankly, should) apply. As always, I’d love to hear what you think.