Stop sweeping your failing tests under the RUG

Hello and welcome to this week’s rant on bad practices in test automation! Today’s serving of automation bitterness is inspired by a question I saw (and could not NOT reply to) on LinkedIn. It went something like this:

My tests are sometimes failing for no apparent reason. How can I implement a mechanism that automatically retries running the failing tests to get them to pass?

It’s questions like this that make the hairs on the back of my neck stand on end. Instead of sweeping your test results under the RUG (Retry Until Green), how about diving into your tests and fixing the root cause of the failure?

First of all, there is ALWAYS a reason your test fails. It might be the application under test (your test did its work, congratulations!), but it might just as well be your test itself that’s causing the failure. The fact that the reason for the failure is not apparent does not mean you can simply ignore it and try running your test a second time to see if it passes then. No, it means there’s work for you to do. It might not be fun work: dealing with and catching all the kinds of exceptions a Selenium test can throw can be very tricky. The task also might not be suitable for you: maybe you’re inexperienced and therefore think ‘forget debugging, I’ll just retry the test, that’s way easier’. That’s OK, we’ve all been inexperienced at some point in our career. In a lot of ways, most of us still are. And I myself have not exactly been innocent of this behavior in the past, either.

But at some point, it’s time to stop complaining about flaky tests and do something about them. That means diving deep into your tests and how they interact with your application under test, getting to the root cause of the error or exception being thrown, and fixing it once and for all. Here’s a real-world example from a project I’m currently working on.

In my tests, I need to fill out a form to create a new savings account. Because the application needs to be sure that all information entered is valid, there’s a lot of front-end input validation going on (the zip code needs to exist, the email address should be formatted correctly, etc.). Whenever the application is busy validating or processing input, a modal appears, indicating to the end user that the site is busy and that they should wait a little before proceeding. Sounds like a good idea, right? However, when you want your tests to fill in these forms automatically, you’ll sometimes run into the issue of trying to click a button or complete a text field while it is being blocked by the modal. Cue WebDriverException (“other element would receive the click”) and a failing test.

Now, there are two ways to deal with this:

  1. Sweep the test result under the RUG and retry until that pesky modal does not block your script from completing, or
  2. Catch the WebDriverException, wait until the modal is no longer there and perform your click or sendKeys again. Writing wrappers around the Selenium API calls is a great way of achieving this, by the way (see the sketch below).

Option 1 is the easy way. Option 2 is the right way. You choose. Just know that every failing test is trying to tell you something. Most of the time, it’s telling you to write a better test.
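To make option 2 a little more concrete, here’s a minimal sketch of what such a wrapper could look like in C# with Selenium WebDriver. Note that SafeClick and the busyModal locator are hypothetical names I made up for this example; the idea is simply to catch the exception, wait for the modal to disappear and then retry:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class SeleniumWrappers
{
    // Hypothetical wrapper: click an element, and if the click is intercepted
    // (for example by a 'busy' modal), wait for the modal to disappear and retry once
    public static void SafeClick(IWebDriver driver, By element, By busyModal)
    {
        try
        {
            driver.FindElement(element).Click();
        }
        catch (WebDriverException)
        {
            WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            wait.Until(d => d.FindElements(busyModal).Count == 0); // modal is gone
            driver.FindElement(element).Click();
        }
    }
}

A similar wrapper around sendKeys completes the picture, and the waiting and retrying logic then lives in exactly one place instead of being scattered throughout your tests.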

One more argument in favour of NOT sweeping your flaky tests under the RUG, but preventing them from happening in the future: some day, your organization might start, you know, actually relying on these test results. For example, as part of a go/no-go decision for deployment into production. If I were to call the shots, I’d make sure that every test I rely on for making that decision was completely trustworthy: no flakiness, no retries, no hidden failures.

Really, it’s time to quit tolerating flaky tests. Repair them or throw them away, because what’s the added value of an unreliable test? Just don’t sweep your failing tests under the RUG.

Utterly unemployable, or another update on crafting my career

I can’t believe we’re almost halfway through April already. With 2017 well on its way, I thought it would be a good time for another post on the way I’m trying to craft my career and build my ideal ‘job’ (intentionally put between quotes). As you might have read in previous posts I wrote on this topic, I’m working hard to move away from the 40-hours-per-week, 9-to-5 model that’s all too prevalent in the IT consultancy world.

I’m writing this post because I see another trend in the projects I’m taking on. Whereas earlier I would join an organization temporarily as part of an Agile team and take on all kinds of tasks related to testing and test automation, I’m now working more and more on shorter-term projects, where clients ask me to build an initial version of a test automation solution for them and educate them in extending and maintaining it.

Not coincidentally, this is exactly the type of project for someone who gets bored as quickly as I do. A typical project nowadays spans anywhere from two weeks to two or three months and looks somewhat like this:

  1. Client indicates that help is needed in setting up or improving their test automation solution.
  2. I interview and talk with client stakeholders to get the questions behind the question answered (what is it that you want to test in an automated manner? Does that make sense? What’s the reason previous attempts didn’t work?). This is probably the most important stage of the project! Failing to ask the right questions, or not getting the answers you need, increases the risk of a suboptimal (or useless) ‘solution’ afterwards.
  3. I start building the solution, asking for feedback and demoing a couple of times per week, with the frequency depending on the number of days I have for the project and the number of days per week I can spend on it.
  4. I organize a workshop where the engineers that will be working with the solution after I have left the building spend some time writing new tests and maintaining existing ones. This gives me feedback on whether what I’ve built for them works. It also gives the client feedback on whether the solution is right for them.
  5. After gathering feedback, I’ll either wrap up and move on or do a little more work improving the solution where needed. This rework should be minimal due to the early feedback I get from interviews and discussions with stakeholders.

After my time with the client ends, I’ll make an effort to regularly follow up and see whether the solution is still to their liking, or if there are any problems with it. This is also an opportunity to hear about cool improvements that engineers made (and that I can steal for future projects)!

Alongside this consulting work, I’m spending an increasing amount of time writing blog posts and articles for tech websites (and the occasional magazine). You might have seen the list of articles that have been published on the articles page. As you can see, it’s steadily growing, and at the moment, I’ve got at least four more articles lined up for the year, a number that’ll surely increase as 2017 progresses.

Another thing I’ve noticed is that my work is slowly but steadily getting more and more international. This doesn’t mean I’m travelling the world consulting and speaking (at least not yet), but recently I’ve been discussing options for collaboration with people and organizations abroad. These opportunities vary from writing to taking part in webinars, all the way to (remote) consulting projects. Not all of them have come through, and with a new home and two small children I’m not exactly in a position to travel much right now, but I’m nurturing these relationships nonetheless, since you never know where they will lead you.

In May, I’m travelling to the Romanian Testing Conference, where I’ll host a REST Assured workshop and attend the conference itself, and I’m looking at another trip somewhere in the fall. I’m not sure where I’ll be bound, but there are some opportunities that can definitely lead to something. And I’m always keeping my eyes and ears open for opportunities I just can’t say ‘no’ to.

I’m starting to love the ‘job’ (there are the quotes again) I’m slowly crafting this way. It gives me the opportunity to say ‘yes’ to the things I want to do, and to say ‘no’ if something isn’t interesting enough or if I don’t have the time. I’m still struggling with that last bit, though; there’s just too much cool stuff to do! I’m not sure what my career and my working days will look like in five years, so don’t ask. I hate that question anyway. I AM thinking about the future, though, and about whether it will be as easy for me to do the things I love if a) I get older and/or b) the market for test automation slows down or comes to a halt.

I’m also noticing that I’m growing increasingly unemployable, in the sense that I can’t see myself working for a boss or manager anytime soon. The prospect of having to deal with end-of-year reviews, billable hour targets and having to say ‘no’ to things I want to do but that might not be in the best interest of (or directly profitable to) the organization makes me never want to return to that world. I hope I’ll never have to, ever again. But I don’t worry about that yet, because it’s quite hard NOT to have a freelance project (or three) at the moment. A real first world problem right there!

The message behind all of the above is that in testing and test automation, too, there IS a way other than spending 40 hours per week on a given project for months (or years) on end. I’m not saying there’s anything wrong with doing so, but it doesn’t work for me any more, and I’m pretty sure I’m not alone. It DOES take time, effort and above all perseverance, though, to get where you want to be. Whenever I tell someone about an email I received through this blog, asking me to collaborate on a cool project, what they don’t see (or don’t want to see) is all the work I’m putting into writing and publishing a blog post. Every. Single. Week. That takes time. That takes effort. And above all, that takes perseverance. But only by persevering and habitually putting out (hopefully quality) content, sharing your knowledge, expertise and experiences (mostly for free), will you start getting noticed. And maybe, after a couple of years, there will be some paid spin-off work. It’s worth it. It just doesn’t happen overnight.

Extending my solution with API testing capabilities, and troubles with open source projects

In last week’s blog post, I introduced my approach to creating a solution for writing and executing automated user interface-driven tests. In this blog post, I’m going to extend the capabilities of my solution by adding automated tests that exercise a RESTful API.

Returning readers might know that I’m a big fan of REST Assured. However, since that’s a Java-based DSL and my solution is written in C#, I can’t use REST Assured here. I’ve spent some time looking at alternatives and decided on RestSharp for this example. Why RestSharp? Because it has a clean and fairly readable API, which makes it easy to use, even for non-wizardlike programmers such as yours truly. This is a big plus for me when creating test automation solutions because, as a consultant, there will always come a moment when you need to hand over your solution to somebody else. And that somebody might be just as inexperienced when it comes to programming as I am, so I think it’s important to use tools that are straightforward and easy to use, while still powerful enough to perform the required tasks. RestSharp ticks those boxes fairly well.

A sample feature and scenario for a RESTful API
Again, we start with the top level of our test implementation: the feature and scenario(s) that describe the required behaviour. Here goes:

Feature: CircuitsApi
	In order to impress my friends
	As a Formula 1 fan
	I want to know the number of races for a given Formula 1 season

@api
Scenario Outline: Check the number of races in a season
	Given I want to know the number of Formula One races in <season>
	When I retrieve the circuit list for that season
	Then there should be <numberOfCircuits> circuits in the list returned
	Examples:
	| season | numberOfCircuits |
	| 2017   | 20               |
	| 2016   | 21               |
	| 1966   | 9                |
	| 1950   | 8                |

Note that I’ve added a tag @api to the scenario. This allows me to prevent my solution from starting a browser for these API tests (which would only slow down test execution) by writing dedicated setup and teardown methods that execute only for scenarios with a certain tag. This is really easy to do with SpecFlow. See the GitHub page for the solution for more details.
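As an illustration, such a tag-scoped hook could look something like the sketch below. The class and method names here are mine, for illustration only; see the GitHub page for the actual implementation:

using TechTalk.SpecFlow;

[Binding]
public class ScenarioHooks
{
    // This hook runs only before scenarios tagged with @api,
    // so no browser is started for these tests
    [BeforeScenario("api")]
    public void BeforeApiScenario()
    {
        // perform API-specific setup here, if any
    }

    // Browser setup and teardown live in separate hooks that are
    // filtered on a different tag (or that run for all other scenarios)
}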

The step definitions
So, how are the above scenario steps implemented? In the Given step, I create the RestClient that is used to send the HTTP request and receive the response, and I set the path parameter specifying the season (year) for which I want to check the number of races:

private RestClient restClient;
private RestRequest restRequest;
private IRestResponse restResponse;

[Given(@"I want to know the number of Formula One races in (.*)")]
public void GivenIWantToKnowTheNumberOfFormulaOneRacesIn(string season)
{
    restClient = new RestClient(Constants.ApiBaseUrl); // http://ergast.com/api/f1

    // {season} acts as a placeholder in the resource path
    restRequest = new RestRequest("{season}/circuits.json", Method.GET);

    // Replace the placeholder with the season from the scenario
    restRequest.AddUrlSegment("season", season);
}

The When step is even more straightforward: all that’s done here is executing the request through the RestClient and storing the response in the IRestResponse field:

[When(@"I retrieve the circuit list for that season")]
public void WhenIRetrieveTheCircuitListForThatSeason()
{
    restResponse = restClient.Execute(restRequest);
}

Finally, in the Then step, we parse the response to perform our check. In this case, we’re not interested in the value of a specific field, but rather in the number of elements in the array of circuits returned. And, obviously, we want to report the result of our check to the ExtentReports report we’re creating during test execution:

[Then(@"there should be (.*) circuits in the list returned")]
public void ThenThereShouldBeCircuitsInTheListReturned(int numberOfSeasons)
{
    dynamic jsonResponse = JsonConvert.DeserializeObject(restResponse.Content);

    JArray circuitsArray = jsonResponse.MRData.CircuitTable.Circuits;

    OTAAssert.AssertEquals(null, test, circuitsArray.Count, numberOfSeasons, "The actual number of circuits in the list is equal to the expected value " + numberOfSeasons.ToString());
}

Basically, what we’re doing here is deserializing the JSON response and storing it in a dynamic object. I wasn’t familiar with the dynamic concept before, but it turns out to be very useful here. The dynamic type can be used for objects whose structure isn’t known until runtime, which holds true here (we haven’t defined any C# classes describing what the JSON response looks like). We can then simply traverse the dynamic jsonResponse until we get to the field we need for our check. It may not be the best or most reusable solution, but it definitely shows the power of the C# language.
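For reference, this is roughly the shape of the JSON response being traversed, trimmed to the part used in the check (the fields inside each circuit object are abbreviated):

{
    "MRData": {
        "CircuitTable": {
            "season": "2017",
            "Circuits": [
                { "circuitId": "...", "circuitName": "..." },
                ...
            ]
        }
    }
}

The expression jsonResponse.MRData.CircuitTable.Circuits follows exactly this nesting, and the resulting JArray exposes the Count property used in the assertion.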

The trouble with RestSharp
As you can see, with RestSharp it’s really easy to write tests for RESTful APIs and add them to our solution. There’s one problem, though: RestSharp no longer seems to be actively maintained. The most recent version of RestSharp was released on August 26, 2015, more than a year and a half ago. There’s no response to the issues posted on GitHub either, which doesn’t bode very well for the liveliness of the project. For me, when deciding whether or not to use an open source project, this is a big red flag.

One alternative to RestSharp I found was RestAssured.Net. This project looks like an effort to port the original REST Assured to C#. It looks useful enough; however, it suffers from the same problem as RestSharp: no recent activity. For me, that’s enough reason to discard it.

Just before writing this post, I was made aware of yet another possible solution, called Flurl. This does look like a promising alternative, but unfortunately I didn’t have time to try it out for myself before the due date of this blog post. I’ll check it out during the week, and if it lives up to its promising appearance, Flurl stands a good chance of being the topic of next week’s blog post. Until then, you can find my RestSharp implementation of the RESTful API tests on the GitHub page of my solution.