Using JsonPath and XmlPath in REST Assured

While preparing my REST Assured workshop for the Romanian Testing Conference next month, I ran into a subject I feel I didn’t cover enough the previous times I hosted the workshop: how to effectively use JsonPath and XmlPath to extract specific elements and groups of elements from RESTful API responses before verifying them. Here are a couple of tricks I’ve learned since then and worked into the exercises that’ll be part of the workshop from now on.

The examples in this post are all based on the following XML response:

<?xml version="1.0" encoding="UTF-8" ?>
<cars>
	<car make="Alfa Romeo" model="Giulia">
		<country>Italy</country>
		<year>2016</year>
	</car>
	<car make="Aston Martin" model="DB11">
		<country>UK</country>
		<year>1949</year>
	</car>
	<car make="Toyota" model="Auris">
		<country>Japan</country>
		<year>2012</year>
	</car>
</cars>

The syntax for JsonPath is very similar; the obvious exception is attributes, which JSON simply does not have.
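To illustrate, here’s a minimal sketch of what a similar check could look like with JsonPath, assuming a hypothetical JSON version of the same response in which the cars are objects in a top-level cars array and make and model are regular fields. The endpoint, class name and JSON structure are assumptions for illustration only; the static imports shown are the ones all snippets in this post rely on (in older REST Assured versions, given() lives in com.jayway.restassured.RestAssured instead):

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class CarsJsonTest {

	// Assumed JSON shape: { "cars": [ { "make": "...", "model": "...", "country": "...", "year": ... }, ... ] }
	@Test
	public void checkCountryForFirstCarInJson() {

		given().
		when().
			get("http://path.to/cars-as-json").               // hypothetical JSON endpoint
		then().
			assertThat().
			body("cars[0].country", equalTo("Italy")).        // same traversal as in XML, minus the attributes
			body("cars[1].model", equalTo("DB11"));           // make and model are plain fields in JSON
	}
}

The remaining examples all use the XML response shown above.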

Extracting a single element based on its index
Let’s get started with an easy example. Say I want to check that the first car in the list is made in Italy. To do this, we can simply traverse the XML tree until we get to the right element, using the index [0] to select the first car in the list:

@Test
public void checkCountryForFirstCar() {
						
	given().
	when().
		get("http://path.to/cars").
	then().
		assertThat().
		body("cars.car[0].country", equalTo("Italy"));
}

Similarly, we can check that the last car came on the market in 2012, using the [-1] index (this points us to the last item in a list):

@Test
public void checkYearForLastCar() {
						
	given().
	when().
		get("http://path.to/cars").
	then().
		assertThat().
		body("cars.car[-1].year", equalTo("2012"));
}

Extracting an attribute value
Just as easily, you can extract and check the value of an attribute in an XML document. If we want to check that the model of the second car in the list is ‘DB11’, we can do so using the ‘@’ notation:

@Test
public void checkModelForSecondCar() {
						
	given().
	when().
		get("http://path.to/cars").
	then().
		assertThat().
		body("cars.car[1].@model", equalTo("DB11"));
}

Counting the number of occurrences of a specific value
Now for something a little more complex: let’s assume we want to check that there’s only one car in the list that is made in Japan. To do this, we’ll use findAll to filter the list of cars on their country element, and then count the number of matches using size():

@Test
public void checkThereIsOneJapaneseCar() {
		
	given().
	when().
		get("http://path.to/cars").
	then().
		assertThat().
		body("cars.car.findAll{it.country=='Japan'}.size()", equalTo(1));
}

Likewise, we can also check that there are two cars that are made either in Italy or in the UK, using the in operator:

@Test
public void checkThereAreTwoCarsThatAreMadeEitherInItalyOrInTheUK() {
		
	given().
	when().
		get("http://path.to/cars").
	then().
		assertThat().
		body("cars.car.findAll{it.country in ['Italy','UK']}.size()", equalTo(2));
}

Performing a search for a specific string of characters
Finally, instead of looking for exact attribute or element value matches, we can also filter on patterns. This is done using the grep() method (very similar to the Unix command). If we want to check that there are two cars in the list whose make starts with an ‘A’, we can do so like this:

@Test
public void checkThereAreTwoCarsWhoseMakeStartsWithAnA() {
		
	given().
	when().
		get("http://localhost:9876/xml/cars").
	then().
		assertThat().
		body("cars.car.@make.grep(~/A.*/).size()", equalTo(2));
}

If you know of more examples, or if I missed another example of how to use JsonPath / XmlPath, do let me know!

Stop sweeping your failing tests under the RUG

Hello and welcome to this week’s rant on bad practices in test automation! Today’s serving of automation bitterness is inspired by a question I saw (and could not NOT reply to) on LinkedIn. It went something like this:

My tests are sometimes failing for no apparent reason. How can I implement a mechanism that automatically retries running the failing tests to get them to pass?

It’s questions like this that make the hairs on the back of my neck stand on end. Instead of sweeping your test results under the RUG (Retry Until Green), how about diving into your tests and fixing the root cause of the failure?

First of all, there is ALWAYS a reason your test fails. It might be the application under test (your test did its work, congratulations!), but it might just as well be your test itself that’s causing the failure. The fact that the reason for the failure is not apparent does not mean you can simply ignore it and try running your test a second time to see if it passes then. No, it means there’s work for you to do. It might not be fun work: dealing with and catching all the kinds of exceptions a Selenium test can throw can be very tricky. You might also not feel up to the task: maybe you’re inexperienced and therefore think ‘forget debugging, I’ll just retry the test, that’s way easier’. That’s OK, we’ve all been inexperienced at some point in our career. In a lot of ways, most of us still are. And I myself have not exactly been innocent of this behavior in the past either.

But at some point, it’s time to stop complaining about flaky tests and do something about it. That means diving deep into your tests and how they interact with your application under test, getting to the root cause of the error or exception being thrown, and fixing it once and for all. Here’s a real-world example from a project I’m currently working on.

In my tests, I need to fill out a form to create a new savings account. Because the application needs to be sure that all information entered is valid, there’s a lot of front-end input validation going on (the zip code needs to exist, the email address should be formatted correctly, etc.). Whenever the application is busy validating or processing input, a modal appears that tells the end user the site is busy and that they should wait a little before proceeding. Sounds like a good idea, right? However, when you want your tests to fill in these forms automatically, you’ll sometimes run into the issue that you’re trying to click a button or complete a text field while it is being blocked by the modal. Cue a WebDriverException (“other object would receive the click”) and a failing test.

Now, there are two ways to deal with this:

  1. Sweep the test result under the RUG and retry until that pesky modal does not block your script from completing, or
  2. Catch the WebDriverException, wait until the modal is no longer there and do your click or sendKeys again. Writing wrappers around the Selenium API calls is a great way of achieving this, by the way.

Option 1 is the easy way. Option 2 is the right way. You choose. Just know that every failing test is trying to tell you something. Most of the time, it’s telling you to write a better test.
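To make option 2 a little more concrete, here’s a minimal sketch of such a wrapper around the Selenium click. The names used here (ElementActions, safeClick, the .busy-modal locator) are hypothetical and purely illustrative, not taken from the actual project:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ElementActions {

	// Hypothetical locator for the 'site is busy' modal; the real application will use something else
	private static final By BUSY_MODAL = By.cssSelector(".busy-modal");

	private final WebDriver driver;

	public ElementActions(WebDriver driver) {
		this.driver = driver;
	}

	// Wrapper around click(): if the modal intercepts the click, wait for it to disappear and try once more
	public void safeClick(By locator) {
		try {
			driver.findElement(locator).click();
		} catch (WebDriverException e) {
			// Wait (up to 10 seconds) for the modal to disappear, then retry the click a single time
			// (Selenium 3 style constructor; in Selenium 4 the timeout is passed as a java.time.Duration)
			new WebDriverWait(driver, 10)
					.until(ExpectedConditions.invisibilityOfElementLocated(BUSY_MODAL));
			driver.findElement(locator).click();
		}
	}
}

A wrapper around sendKeys() would follow the same pattern. The important difference with the RUG approach is that the retry here happens for a known, diagnosed reason, instead of blindly rerunning the whole test and hoping for green.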

One more argument in favour of NOT sweeping your flaky tests under the RUG, but of preventing them from happening in the future: some day, your organization might start, you know, actually relying on these test results. For example, as part of a go / no-go decision for deployment to production. If I were to call the shots, I’d make sure that every test I rely on for making that decision could be trusted completely.

Really, it’s time to quit tolerating flaky tests. Repair them or throw them away, because what’s the added value of an unreliable test? Just don’t sweep your failing tests under the RUG.

Utterly unemployable, or another update on crafting my career

I can’t believe we’re almost halfway through April already. With 2017 well on its way, I thought it would be a good time for another post on how I’m trying to craft my career and build my ideal ‘job’ (the quotes are intentional). As you might have read in previous posts on this topic, I’m working hard to move away from the 40-hours-per-week, 9-to-5 model that’s all too prevalent in the IT consultancy world.

I’m writing this post because I see another trend in the projects I’m taking on. Whereas earlier I would join an organization temporarily as part of an Agile team and take on all kinds of tasks related to testing and test automation, I’m now working more and more on shorter-term projects, where clients ask me to build an initial version of a test automation solution for them and educate them in extending and maintaining it.

Not coincidentally, this is exactly the type of project for someone who gets bored as quickly as I do. A typical project nowadays runs anywhere from two weeks to two or three months and looks somewhat like this:

  1. Client indicates that help is needed in setting up or improving their test automation solution.
  2. I interview client stakeholders and discuss their situation with them to get the questions behind the question answered (what is it that you want to test in an automated manner? Does that make sense? What’s the reason previous attempts didn’t work?). This is probably the most important stage of the project! Failing to ask the right questions, or not getting the answers you need, increases the risk of ending up with a suboptimal (or useless) ‘solution’.
  3. I start building the solution, asking for feedback and demoing a couple of times per week, with the frequency depending on the number of days I have for the project and the number of days per week I can spend on it.
  4. I organize a workshop where the engineers that will be working with the solution after I have left the building spend some time writing new tests and maintaining existing ones. This gives me feedback on whether what I’ve built for them works. It also gives the client feedback on whether the solution is right for them.
  5. After gathering feedback, I’ll either wrap up and move on or do a little more work improving the solution where needed. This rework should be minimal due to the early feedback I get from interviews and discussions with stakeholders.

After my time with the client ends, I’ll make an effort to regularly follow up and see whether the solution is still to their liking, or if there are any problems with it. This is also an opportunity to hear about cool improvements that engineers made (and that I can steal for future projects)!

Alongside this consulting work, I’m spending an increasing amount of time writing blog posts and articles for tech websites (and the occasional magazine). You might have seen the list of articles that have been published on the articles page. As you can see, it’s steadily growing, and at the moment I’ve got at least four more articles lined up for the year, a number that’ll surely grow as 2017 progresses.

Another thing I’ve noticed is that my work is slowly but steadily becoming more international. This doesn’t mean I’m travelling the world consulting and speaking (at least not yet), but recently I’ve been discussing options for collaboration with people and organizations abroad. These opportunities range from writing and taking part in webinars all the way to (remote) consulting projects. Not all of them have come to fruition, and with a new home and two small children I’m not exactly in a position to travel that much right now, but I’m nurturing these relationships nonetheless, since you never know where they will lead.

I do have a trip coming up in May for the Romanian Testing Conference, where I’ll host a REST Assured workshop and attend the conference itself, and I’m looking at another trip somewhere in the fall. I’m not sure yet where I’ll be bound, but there are some opportunities that could definitely lead to something. And I’m always keeping my eyes and ears open for opportunities I just can’t say ‘no’ to.

I’m starting to love the ‘job’ (there are the quotes again) I’m slowly crafting this way. It gives me the opportunity to say ‘yes’ to the things I want to do, and to say ‘no’ if something isn’t interesting enough or if I don’t have the time. Although I’m still struggling with that last bit, there’s just too much cool stuff to do! I’m not sure what my career and my working days will look like in five years, so don’t ask. I hate that question anyway. I AM thinking about the future, though, and about whether it will be as easy for me to do the things I love if a) I get older and/or b) the market for test automation slows down or comes to a halt.

I’m also noticing that I’m growing increasingly unemployable, in the sense that I can’t see myself working for a boss or manager anytime soon. The prospect of having to deal with end-of-year reviews, billable-hour targets, and having to say ‘no’ to things I want to do but that might not be in the organization’s best interest or directly profitable makes me never want to go back. I hope I’ll never have to, ever again. But I don’t worry about that yet, because it’s quite hard NOT to have a freelance project (or three) at the moment. A real first-world problem right there!

The message behind all of the above is that in testing and test automation, too, there IS a way other than spending 40 hours per week on a given project for months (or years) on end. I’m not saying there’s anything wrong with doing so, but it doesn’t work for me any more, and I’m pretty sure I’m not alone. It DOES take time, effort and above all perseverance, though, to get where you want to be. Whenever I tell someone about an email I received through this blog asking me to collaborate on a cool project, what they don’t (or don’t want to) see is all the work I put into writing and publishing a blog post. Every. Single. Week. That takes time. That takes effort. And above all, that takes perseverance. But only by persevering, by habitually putting out (hopefully quality) content and by sharing your knowledge, expertise and experiences (mostly for free) will you start getting noticed. And maybe, after a couple of years, there will be some paid spin-off work. It’s worth it. It just doesn’t happen overnight.