On my 2018 and my 2019

Wow, another year has flown by! And what an amazing year it has been. Now that the end of the year is coming ever closer, I’d like to look back a little on this last year and look forward to what 2019 might have in store for me.

The freelance life
2018 was my first full year freelancing under the On Test Automation label. As I’ve said in previous posts, it fits me like a glove. What I’ve been especially grateful for this year is that being a freelancer has given me the freedom to choose whatever I want to spend my time on, without having to get permission from anybody else. It has also allowed me to be there for my family whenever it’s been needed, without having to deal with sick days or annual leave budgets.

Needless to say, I’ll continue to work as a freelancer in 2019!

Client work
I’ve done consultancy on a per-hour billing basis for four different clients this year. Sometimes as part of a software development team, sometimes in an advisory role. I’ve noticed that the latter suits me far better, so that’s what I’ll try and keep doing in 2019. These roles are a little harder to come by, and they’re often not even publicly advertised, so I’ll have to make sure that people know where to find me in case they’re looking.

I’m happy to say that I’ll be starting a project that sounds like a perfect fit in early January with a brand new client, where I’ll advise and support several development teams with regard to their test automation efforts for 2 days per week. I’m really looking forward to that.

Training
2018 has been the year where I finally started to increase my efforts to land more training gigs. Delivering training is what I like to do best, and I hope that in 2019 I will be able to reap what I have been sowing this year. In 2018, I delivered 17 training sessions (ranging from 2 hours to a full day) for 8 different clients. I am most proud of the two times I’ve been asked to deliver training abroad, allowing me to do one day of training in the UK (Manchester) and one day in Romania (Cluj).

For 2019, I hope to at least double the number of training sessions delivered, with the ultimate goal of delivering an average of 2 days of training per week (with the rest of my time spent on consulting work, writing, and other things). To get there, I’ve started collaborating with a few training providers this year, and I hope that this pays off in 2019. I am also launching a brand new training course on January 7, one that I’ve got high hopes for, so hopefully I’ll be delivering that one a couple of times too, alongside my existing training offerings.

Speaking gigs
This year has been a relatively quiet one on the speaking front. That’s fine with me, because even though I am starting to like speaking more and more, I like doing training and workshops even better, so that’s where my focus has been. Still, I have done five talks this year. Three of them were in the Netherlands: one at the TestNet spring conference, one at a company meetup, and the one I am most proud of: my very first keynote talk at the Dutch Testing Day. I’ve also delivered two talks abroad: one at the atamVIE meetup in Vienna, Austria, and one at the Romanian Testing Conference.

I would like to do another couple of talks next year, because I’m slowly learning to become a better speaker and I would love to build on that. I have one talk scheduled so far, and it’s none other than my very first international keynote, at the UKStar conference in London in March. I am really, really looking forward to that one!

Conferences
Speaking of conferences, it has been a relatively quiet year on that front as well. I think I’ve attended five conferences this year, four in the Netherlands (TestNet 2x, TestBash NL and the Test Automation Day) and one abroad (the Romanian Testing Conference). At all of these conferences, I’ve been a contributor, either with a talk or with a workshop (or, in the case of RTC, both).

Next year, I would love to attend more conferences, and not necessarily as a contributor each and every time. I’d also like to broaden my horizons and attend one or two conferences outside of the testing community. Two conferences are on my agenda already: UKStar and TestBash Netherlands, where I’ll be delivering a brand new workshop.

Writing
I’ve been relatively inactive on the writing front this year, too. I’ve published 7 articles (5 in English, 2 in Dutch) on several websites, as well as 10 blog posts on this site, including this one. Next year, I’m planning to pick up the pen more often again, both for other websites and for my own blog. It will be a matter of consciously making more time for it, as that has been lacking a bit this year.

Webinars
Finally, I’ve also done four webinars this year, and I’m planning on doing a couple more of them next year. The organisers that had to suffer from my ramblings this year were Beaufort Fairmont, Parasoft, TestCraft and CrossBrowserTesting.

So, all in all, it has been a very diverse year! That’s a good thing, but it’s also a trap I’ve fallen into: my attention has been divided over so many different things that the ones that are really important to me (training, writing) have suffered a little. That’s a lesson I’ll definitely take with me into next year.

But first, it’s time to relax a little. We’ll see each other again in 2019. I hope that it’s going to be an amazing year for all of you.

RESTful API testing in C# with RestSharp

Since my last blog post that involved creating tests at the API level in C#, I’ve kept looking around for a library that would fit all my needs in that area. So far, I still haven’t found anything more suitable than RestSharp. Also, I’ve found out that RestSharp is more versatile than I initially thought it was, and that’s the reason I thought it would be a good idea to dedicate a blog post specifically to this tool.

The examples I show you in this blog post use the Zippopotam.us API, a publicly accessible API that resolves a combination of a country code and a zip code to related location data. For example, when you send an HTTP GET call to

http://api.zippopotam.us/us/90210

(where ‘us’ is a path parameter representing a country code, and ‘90210’ is a path parameter representing a zip code), you’ll receive this JSON document as a response:

{
	"post code": "90210",
	"country": "United States",
	"country abbreviation": "US",
	"places": [
		{
			"place name": "Beverly Hills",
			"longitude": "-118.4065",
			"state": "California",
			"state abbreviation": "CA",
			"latitude": "34.0901"
		}
	]
}

Some really basic checks
RestSharp is available as a NuGet package, which makes it really easy to add to your C# project. So, what does an API test written using RestSharp look like? Let’s say that I want to check whether the previously mentioned HTTP GET call to http://api.zippopotam.us/us/90210 returns an HTTP status code 200 OK. This is what that test looks like:

using System.Net;
using NUnit.Framework;
using RestSharp;

[Test]
public void StatusCodeTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/90210", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
}

If I wanted to check that the content type specified in the API response header is equal to “application/json”, I could do that like this:

[Test]
public void ContentTypeTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("nl/3825", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.ContentType, Is.EqualTo("application/json"));
}
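
A small refactoring suggestion of my own before moving on (this is not part of the original examples, and the ZippopotamUsTests class name is purely illustrative): since both tests repeat the same arrange step, you could create the RestClient once for the entire test class using NUnit’s OneTimeSetUp attribute. A minimal sketch:

[TestFixture]
public class ZippopotamUsTests
{
    // Create the RestClient once for the whole fixture instead of in every test
    private RestClient client;

    [OneTimeSetUp]
    public void CreateRestClient()
    {
        client = new RestClient("http://api.zippopotam.us");
    }

    [Test]
    public void StatusCodeTest()
    {
        // arrange
        RestRequest request = new RestRequest("us/90210", Method.GET);

        // act
        IRestResponse response = client.Execute(request);

        // assert
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
}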

Creating data driven tests
As you can see, creating these basic checks is quite straightforward with RestSharp. Since APIs are all about sending and receiving data, it would be good to be able to make these tests data driven. NUnit supports data driven testing through the TestCase attribute, and using that attribute together with parameters on the test method is really all it takes to create a data driven test:

[TestCase("nl", "3825", HttpStatusCode.OK, TestName = "Check status code for NL zip code 7411")]
[TestCase("lv", "1050", HttpStatusCode.NotFound, TestName = "Check status code for LV zip code 1050")]
public void StatusCodeTest(string countryCode, string zipCode, HttpStatusCode expectedHttpStatusCode)
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest($"{countryCode}/{zipCode}", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.StatusCode, Is.EqualTo(expectedHttpStatusCode));
}

When you run the test method above, you’ll see that it runs two tests: one that checks that the Dutch zip code 3825 returns HTTP 200 OK, and one that checks that the Latvian zip code 1050 returns HTTP 404 Not Found (Latvian zip codes are not yet available in the Zippopotam.us API). If you ever want to add a third test case, all you need to do is add another TestCase attribute with the required parameters and you’re set.
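
And if the list of test cases keeps growing, NUnit’s TestCaseSource attribute offers an alternative that keeps the test data out of the attribute list. A minimal sketch of my own, reusing the same test data as above (note that this requires a using System.Collections.Generic; directive):

private static IEnumerable<TestCaseData> ZipCodeStatusCodeData()
{
    // The same two cases as above, now defined as a reusable data source
    yield return new TestCaseData("nl", "3825", HttpStatusCode.OK).
        SetName("Check status code for NL zip code 3825");
    yield return new TestCaseData("lv", "1050", HttpStatusCode.NotFound).
        SetName("Check status code for LV zip code 1050");
}

[TestCaseSource(nameof(ZipCodeStatusCodeData))]
public void StatusCodeTestUsingTestCaseSource(
    string countryCode, string zipCode, HttpStatusCode expectedHttpStatusCode)
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest($"{countryCode}/{zipCode}", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.StatusCode, Is.EqualTo(expectedHttpStatusCode));
}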

Working with response bodies
So far, we’ve only written assertions on the HTTP status code and the content type header value for the response. But what if we wanted to perform assertions on the contents of the response body?

Technically, we could parse the JSON response and navigate through the response document tree directly, but that would result in hard-to-read and hard-to-maintain code (see this post again for an example, where I convert a specific part of the response to a JArray after navigating to it and then count the number of elements in it). Since you’re working with dynamic objects, you also don’t have the added luxury of autocomplete, because there’s no way your IDE knows the structure of the JSON document you expect in a test.

Instead, I highly prefer deserializing JSON responses to actual objects, or POCOs (Plain Old C# Objects) in this case. The JSON response you’ve seen earlier in this blog post can be represented by the following LocationResponse class:

using System.Collections.Generic;
using Newtonsoft.Json;

public class LocationResponse
{
    [JsonProperty("post code")]
    public string PostCode { get; set; }
    [JsonProperty("country")]
    public string Country { get; set; }
    [JsonProperty("country abbreviation")]
    public string CountryAbbreviation { get; set; }
    [JsonProperty("places")]
    public List<Place> Places { get; set; }
}

and the Place class inside looks like this:

public class Place
{
    [JsonProperty("place name")]
    public string PlaceName { get; set; }
    [JsonProperty("longitude")]
    public string Longitude { get; set; }
    [JsonProperty("state")]
    public string State { get; set; }
    [JsonProperty("state abbreviation")]
    public string StateAbbreviation { get; set; }
    [JsonProperty("latitude")]
    public string Latitude { get; set; }
}

Using the JsonProperty attribute allows me to map POCO properties to JSON document elements without the names having to match exactly, which in this case is especially useful since some of the element names contain spaces, which are impossible to use in C# property names.

Now that we have modeled our API response as a C# class, we can convert an actual response to an instance of that class using the deserializer that’s built into RestSharp. After doing so, we can refer to the contents of the response by accessing the properties of the object, which makes for far easier test creation and maintenance:

using RestSharp.Deserializers;

[Test]
public void CountryAbbreviationSerializationTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/90210", Method.GET);

    // act
    IRestResponse response = client.Execute(request);
    LocationResponse locationResponse =
        new JsonDeserializer().Deserialize<LocationResponse>(response);

    // assert
    Assert.That(locationResponse.CountryAbbreviation, Is.EqualTo("US"));
}

[Test]
public void StateSerializationTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/12345", Method.GET);

    // act
    IRestResponse response = client.Execute(request);
    LocationResponse locationResponse =
        new JsonDeserializer().Deserialize<LocationResponse>(response);

    // assert
    Assert.That(locationResponse.Places[0].State, Is.EqualTo("New York"));
}
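
As a final illustration (a sketch of my own, not one of the original examples): the element count check that required the JArray gymnastics in the post I referred to earlier now reduces to a simple assertion on the deserialized object:

[Test]
public void NumberOfPlacesTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/90210", Method.GET);

    // act
    IRestResponse response = client.Execute(request);
    LocationResponse locationResponse =
        new JsonDeserializer().Deserialize<LocationResponse>(response);

    // assert: counting elements is now a simple property access
    // (the expected count of 1 matches the example response shown earlier)
    Assert.That(locationResponse.Places.Count, Is.EqualTo(1));
}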

So, it looks like I’ll be sticking with RestSharp for a while when it comes to my basic C# API testing needs. That is, until I’ve found a better alternative.

All the code that I’ve included in this blog post is available on my GitHub page. Feel free to clone this project and run it on your own machine to see if RestSharp fits your API testing needs, too.

On supporting Continuous Testing with FITR test automation (republished)

Note: this is an updated version of an earlier post I wrote in May of last year. Since then, my understanding of Continuous Testing, and of what it takes for automation to be a successful and valuable part of any Continuous Testing effort, has changed slightly, so I thought it would be a good idea to review and republish that post.

Test automation is everywhere, nowadays. That’s probably nothing new to you.

A lot of organizations are adopting Continuous Integration and Continuous Delivery as a means of being able to develop and deliver software in ever shorter increments. Also nothing new.

To be able to effectively implement CI/CD, a lot of organizations are relying on their automated tests to help safeguard quality thresholds while increasing release speed. Again, no breaking news here.

However, automation in and of itself isn’t enough to safeguard quality in CI and CD. You’ll need to be able to do Continuous Testing (CT). Here’s how I define Continuous Testing, a definition greatly influenced by others who have been talking and writing about CT for a while:

Continuous Testing is the process that allows you to get valuable insights into the business risks associated with delivering application increments following a CI/CD approach. No matter if you’re building and deploying once a month or once a minute, CT allows you to formulate an answer to the question ‘are we happy with the level of value that this increment provides to our business / stakeholders / end users?’ for every increment that’s being pushed and deployed in a CI/CD approach.

It won’t come as a surprise to you that automated tests often form a significant part of an organization’s CT strategy. However, just having automated tests is not enough to be able to support CT. Apart from the fact that automation can only do so much (a topic I’ve discussed in several other blogs and articles), not every bit of automation is equally suitable for use in a CT strategy. But how do you decide whether or not your automated tests can be used as part of your CT efforts? And when they can’t, what do you need to take care of to improve them?

In order to be able to leverage your automated tests successfully for supporting CT, I’ve come up with a model based on four pillars that need to be in place for all automated checks before they can become part of your CT process:

From AT to CT with FITR tests

Let’s take a quick look at each of these FITR pillars and why they are necessary when making your automation part of CT.

Focused
Automated tests need to be focused to effectively support CT. ‘Focused’ has two dimensions here.

First of all, your tests should be targeted at the right application component and/or layer. It does not make sense to use a user interface-driven test to test application logic that’s exposed through an API (and subsequently presented through the user interface), for example. Similarly, it does not make sense to write API-level tests that validate the inner workings of a calculation algorithm if unit tests can provide the same level of coverage.
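
To make that first dimension concrete, here is a deliberately simplified sketch of my own (the PriceCalculator class below is hypothetical and only serves as an illustration): logic like this is best covered by a fast unit test, rather than by an API-level or user interface-driven test that exercises the same code through several extra layers.

// A hypothetical calculation class, used only to illustrate test focus
public class PriceCalculator
{
    public decimal CalculateTotal(decimal price, int quantity, decimal discountPercentage)
    {
        return price * quantity * (1 - discountPercentage / 100);
    }
}

// A focused unit test: fast, no HTTP traffic, no browser, no test environment required
[Test]
public void CalculateTotal_WithTenPercentDiscount_ShouldApplyDiscount()
{
    PriceCalculator calculator = new PriceCalculator();

    decimal total = calculator.CalculateTotal(price: 10m, quantity: 2, discountPercentage: 10m);

    Assert.That(total, Is.EqualTo(18m));
}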

The second aspect of focused automated tests is that your tests should only test what they can test effectively. This boils down to sticking to what your test solution and the tools in it do best, and leaving the rest either to other tools or to testers, depending on what’s there to be tested. Don’t try and force your tool to do things it isn’t supposed to do (here’s an example).

If your tests are unfocused, they are far more likely to be slow to run, to have high maintenance costs and to provide inaccurate or shallow feedback on application quality.

Informative
Touching upon shallow or inaccurate feedback, automated tests also need to be informative to effectively support CT. ‘Informative’ also has two separate dimensions.

Most importantly, the results produced and the feedback provided by your automated tests should allow you, or the system that’s doing the interpretation for you (such as an automated build tool), to make important decisions based on that feedback. Make sure that the test results and reporting provided contain clear results, information and error messages, targeted towards the intended audience and addressing business-related risks. Keep in mind that every audience has its own requirements when it comes to this information. Developers likely want to see stack traces, whereas managers don’t. Find out who the target audience for your reporting and test results is, what their requirements are, and then cater to them as best as you can. This might mean creating more than one report (or source of information in general) for a single test run. That’s OK.

Another important aspect of informative automated tests is that it should be clear what they do (and what they don’t do), and what business risk they address. You can make the tests themselves more informative through various means, including (but not limited to) using naming conventions, using a BDD tool such as Cucumber or SpecFlow to create living documentation for your tests, and following good programming practices to make your code more readable and maintainable.
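
As a small, made-up illustration of the naming convention point: a test name that spells out the scope, the condition and the expected outcome tells the reader exactly what is covered, without them having to read the test body.

// Uninformative: says nothing about the condition being tested or the expected outcome
[Test]
public void ZipCodeTest2() { /* ... */ }

// Informative: scope, condition and expected outcome are all captured in the name
[Test]
public void GetLocationData_ForUnknownCountryCode_ShouldReturnHttpNotFound() { /* ... */ }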

When automated test solutions and the results they produce are not informative, valuable time is wasted analyzing shallow feedback, or gathering missing information, which evidently breaks the ‘continuous’ part of CT.

Trustworthy
When you’re relying on your automated tests to make important decisions in your CT activities, you’d better make sure they’re trustworthy. As I described in more detail in previous posts, automated tests that cannot be trusted are essentially worthless. Make sure to eliminate false positives (tests that report a failure when they shouldn’t), but also false negatives (tests that report no failure when they should).

Repeatable
The essential idea behind CT (referring to the definition I gave at the beginning of this blog post) is that you’re able to give insight into application quality and business risks on demand, which means you should be able to run your automation on demand. Especially when you’re including API-level and end-to-end tests, this is often not as easy as it sounds.

There are two main factors that can hinder the repeatability of your tests:

  • Test data. This is in my opinion one of the hardest ones to get right, especially when talking end-to-end tests. Lots of applications I see and work with have complex data models or share test data with other systems. And if you’re especially lucky, you’ll get both. A solid test data strategy should be put in place to do CT, meaning that you’ll either have to create fresh test data at the start of every test run or have the ability to restore test data before every test run. Unfortunately, both options can be quite time consuming (if at all attainable and manageable), drawing you further away from the ‘C’ in CT instead of bringing you closer to it.
  • Test environments. If your application communicates with other components, applications or systems (and pretty much all of them do nowadays), you’ll need suitable test environments for each of these dependencies. This is also easier said than done. One possible way to deal with this is by using a form of simulation, such as mocking or service virtualization (see the sketch below this list). Mocks or virtual assets are under your full control, allowing you to speed up your testing efforts, or even enable them in the first place. Use simulation carefully, though, since it’s yet another moving part of your CT solution to be managed and maintained, and make sure to test against the real thing periodically for optimal results.
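
To illustrate the simulation option from the list above, here’s a minimal sketch using WireMock.Net (my tool choice for this example, not something the approach prescribes; any mocking or service virtualization tool will do) that stands in for an HTTP dependency, so tests can run on demand regardless of whether the real system is available:

using WireMock.Server;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;

// Start a simulated HTTP dependency on a fixed port (assumed to be free)
WireMockServer server = WireMockServer.Start(9876);

// Define canned behaviour for the endpoint the application under test depends on
server
    .Given(Request.Create().WithPath("/us/90210").UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithHeader("Content-Type", "application/json")
        .WithBody("{\"country\": \"United States\"}"));

// Point the application (or your tests) at http://localhost:9876 instead of the
// real dependency, making test runs repeatable and independent of its availability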

Having the above four pillars in place does not guarantee that you’ll be able to perform your testing as continuously as your CI/CD process requires, but it will likely give it a solid push in the right direction.