Data driven testing in C# with NUnit and RestSharp

In a previous post, I gave some examples of how to write basic tests in C# for RESTful APIs using NUnit and the RestSharp library. In this post, I would like to build on that a little by showing you how to make these tests data driven.

For those of you who do not know what I mean by ‘data driven’: when I want to run tests that exercise the same logic or flow in my application under test multiple times, with various combinations of input values and corresponding expected outcomes, I call that data driven testing.

This is especially useful when testing RESTful APIs, since these are all about sending and receiving data as well as exposing business logic to other layers in an application architecture (such as a graphical user interface) or to other applications (consumers of the API).

As a starting point, consider these three tests, written using RestSharp and NUnit:

[TestFixture]
public class NonDataDrivenTests
{
    private const string BASE_URL = "http://api.zippopotam.us";

    [Test]
    public void RetrieveDataForUs90210_ShouldYieldBeverlyHills()
    {
        // arrange
        RestClient client = new RestClient(BASE_URL);
        RestRequest request = 
            new RestRequest("us/90210", Method.GET);

        // act
        IRestResponse response = client.Execute(request);
        LocationResponse locationResponse =
            new JsonDeserializer().
            Deserialize<LocationResponse>(response);

        // assert
        Assert.That(
            locationResponse.Places[0].PlaceName,
            Is.EqualTo("Beverly Hills")
        );
    }

    [Test]
    public void RetrieveDataForUs12345_ShouldYieldSchenectady()
    {
        // arrange
        RestClient client = new RestClient(BASE_URL);
        RestRequest request =
            new RestRequest("us/12345", Method.GET);

        // act
        IRestResponse response = client.Execute(request);
        LocationResponse locationResponse =
            new JsonDeserializer().
            Deserialize<LocationResponse>(response);

        // assert
        Assert.That(
            locationResponse.Places[0].PlaceName,
            Is.EqualTo("Schenectady")
        );
    }

    [Test]
    public void RetrieveDataForCaY1A_ShouldYieldWhiteHorse()
    {
        // arrange
        RestClient client = new RestClient(BASE_URL);
        RestRequest request = 
            new RestRequest("ca/Y1A", Method.GET);

        // act
        IRestResponse response = client.Execute(request);
        LocationResponse locationResponse =
            new JsonDeserializer().
            Deserialize<LocationResponse>(response);

        // assert
        Assert.That(
            locationResponse.Places[0].PlaceName,
            Is.EqualTo("Whitehorse")
        );
    }
}

Please note that the LocationResponse type is a custom type I defined myself; see the GitHub repository for this post for its implementation.
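
To give you an idea, here is a minimal sketch of what that type might look like, based on the JSON structure the API returns (the actual implementation in the repository may differ slightly). RestSharp’s DeserializeAs attribute (from the RestSharp.Deserializers namespace) is used to map the ‘place name’ JSON field, which contains a space, to a C# property:

public class LocationResponse
{
    public string Country { get; set; }

    public List<Place> Places { get; set; }
}

public class Place
{
    // the JSON field name contains a space, so we map it explicitly
    [DeserializeAs(Name = "place name")]
    public string PlaceName { get; set; }
}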

These tests are a good example of what I wrote about earlier: I’m invoking the same logic (retrieving location data based on a country and zip code and then verifying the corresponding place name from the API response) three times with different sets of test data.

This quickly gets inefficient when you add more tests and test data combinations, resulting in a lot of duplicated code. Luckily, NUnit provides several ways to make these tests data driven. Let’s look at two of them in some more detail.

Using the [TestCase] attribute

The first way to create data driven tests is by using the [TestCase] attribute that NUnit provides. You can add multiple [TestCase] attributes for a single test method, and specify the combinations of input and expected output parameters that the test method should take.

Additionally, you can specify other characteristics for the individual test cases. One of the most useful is the TestName property, which can be used to provide a legible and useful name for the individual test case. This name also turns up in the reporting, so I highly advise you to make the effort to specify one.

Here’s what our code looks like when we refactor it to use the [TestCase] attribute:

[TestFixture]
public class DataDrivenUsingAttributesTests
{
    private const string BASE_URL = "http://api.zippopotam.us";

    [TestCase("us", "90210", "Beverly Hills", TestName = "Check that US zipcode 90210 yields Beverly Hills")]
    [TestCase("us", "12345", "Schenectady", TestName = "Check that US zipcode 12345 yields Schenectady")]
    [TestCase("ca", "Y1A", "Whitehorse", TestName = "Check that CA zipcode Y1A yields Whitehorse")]
    public void RetrieveDataFor_ShouldYield
        (string countryCode, string zipCode, string expectedPlaceName)
    {
        // arrange
        RestClient client = new RestClient(BASE_URL);
        RestRequest request =
            new RestRequest($"{countryCode}/{zipCode}", Method.GET);

        // act
        IRestResponse response = client.Execute(request);
        LocationResponse locationResponse =
            new JsonDeserializer().
            Deserialize<LocationResponse>(response);

        // assert
        Assert.That(
            locationResponse.Places[0].PlaceName,
            Is.EqualTo(expectedPlaceName)
        );
    }
}

Much better! We now only have to define our test logic once, and NUnit takes care of iterating over the values defined in the [TestCase] attributes:

NUnit test results for the data driven tests using [TestCase] attributes

There are some downsides to using the [TestCase] attributes, though:

  • It’s all good when you just want to run a small number of test iterations, but when you want to or have to test larger numbers of combinations of input and output parameters, your code quickly gets messy (on a side note, if that’s the case for you, try looking into property-based testing instead of the example-based testing we’re doing here).
  • You still have to hard-code your test data in your code, which might cause problems with scaling and maintaining your tests in the future.

This is where the [TestCaseSource] attribute comes in.

Using the [TestCaseSource] attribute

If you want to or need to work with larger numbers of combinations of test data and/or you want to be able to specify your test data outside of your test class, then using [TestCaseSource] might be a useful option to explore.

In this approach, you specify or read the test data in a separate method, which is then passed to the original test method. NUnit will take care of iterating over the different combinations of test data returned by the method that delivers the test data.

Here’s an example of how to apply [TestCaseSource] to our tests:

[TestFixture]
public class DataDrivenUsingTestCaseSourceTests
{
    private const string BASE_URL = "http://api.zippopotam.us";

    [Test, TestCaseSource("LocationTestData")]
    public void RetrieveDataFor_ShouldYield
        (string countryCode, string zipCode, string expectedPlaceName)
    {
        // arrange
        RestClient client = new RestClient(BASE_URL);
        RestRequest request =
            new RestRequest($"{countryCode}/{zipCode}", Method.GET);

        // act
        IRestResponse response = client.Execute(request);
        LocationResponse locationResponse =
            new JsonDeserializer().
            Deserialize<LocationResponse>(response);

        // assert
        Assert.That(
            locationResponse.Places[0].PlaceName,
            Is.EqualTo(expectedPlaceName)
        );
    }

    private static IEnumerable<TestCaseData> LocationTestData()
    {
        yield return new TestCaseData("us", "90210", "Beverly Hills").
            SetName("Check that US zipcode 90210 yields Beverly Hills");
        yield return new TestCaseData("us", "12345", "Schenectady").
            SetName("Check that US zipcode 12345 yields Schenectady");
        yield return new TestCaseData("ca", "Y1A", "Whitehorse").
            SetName("Check that CA zipcode Y1A yields Whitehorse");
    }
}

In this example, we specify our test data in a separate method, LocationTestData(), and then tell the test method to use that method as the test data source using the [TestCaseSource] attribute, which takes as its argument the name of the test data method (with C# 6 or newer, you can also pass nameof(LocationTestData) instead of a string literal).

For clarity, the test data is still hard-coded in the body of the LocationTestData() method, but that’s not mandatory. You could just as easily write a method that reads the test data from an external source, as long as the test data method is static and returns an IEnumerable, or a type that implements it. A sketch of that approach follows below.
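
As a hypothetical illustration (the file name and CSV layout are my assumptions, not part of the original example), a test data method that reads the same combinations from a CSV file could look like this (it requires a using directive for System.IO):

private static IEnumerable<TestCaseData> LocationTestDataFromCsv()
{
    // each line in the CSV file is expected to look like: us,90210,Beverly Hills
    foreach (string line in File.ReadAllLines("location_test_data.csv"))
    {
        string[] values = line.Split(',');
        yield return new TestCaseData(values[0], values[1], values[2]).
            SetName($"Check that {values[0].ToUpper()} zipcode {values[1]} yields {values[2]}");
    }
}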

Also, since the [TestCase] and [TestCaseSource] attributes are features of NUnit, and not of RestSharp, you can apply the principles illustrated in this post to other types of tests just as well.

Beware, though, of applying them to user interface-driven testing with tools like Selenium WebDriver. Chances are that you’re falling for a classic case of ‘just because you can, doesn’t mean you should’. I find data driven testing with Selenium WebDriver to be a test code smell: if you’re going through the same screen flow multiple times and the only variation is in the test data, there’s a good chance there’s a more efficient way to test the same underlying business logic (for example, by leveraging APIs).

Chris McMahon explains this much more eloquently in a blog post of his, which I highly recommend reading.

For other types of testing (API, unit, …), though, data driven testing can be a very effective way to make your test code more maintainable and more powerful.

All example code in this blog post can be found on this GitHub page.

On quitting Twitter and looking forward

Many of you probably have not noticed, but last weekend I deactivated my Twitter account. Here’s why.

I’ve been active on Twitter for around four years. In that time, it has proven to be a valuable tool for keeping up with industry trends and staying in touch with people I know from conferences and other events, as well as for following a few sources of information unrelated to testing and IT (mainly related to running or music).

My 'I'm leaving Twitter' announcement

Over time, though, the value I got from Twitter was slowly getting undone by two things in particular:

  1. The urge to check my feed and notifications far too often.
  2. The amount of negativity and bickering going on.

I started to notice that, because of these two things, I became ever more restless, distracted and downright anxious, and while Twitter may not have been the only cause, it was definitely a large contributor.

I couldn’t write, code or create courseware for more than 10 minutes without checking my feed. My brain was often too fried in the evening to undertake anything apart from mindlessly watching TV, playing the odd mobile game or consuming even more social media. Not a state I want to be in, and definitely not an example I want to set for my children.

So, at first, I decided to take a Twitter break. I removed the app from my phone, blocked access to the site on my laptop and activated Screen Time to make accessing the mobile Twitter site more of a hassle. And it worked. Up to a point.

My mind became clearer and I became less anxious. But it still felt like it wasn’t enough. There was still the anxiety, and I still kept taking the extra steps needed to check my Twitter feed on my phone. That’s when I thought long and hard about what value I was getting from being on Twitter in the first place. After a while, I came to the realization that it was simply too little to warrant the restlessness, and that the only reasonable thing to do was to pull the plug on my account.

So that’s what I did. And it feels great.

I’m sure I’ll still stay in touch with most people in the field. Business-wise, LinkedIn was and is a much more important source of leads and gigs anyway. There are myriad other ways to keep in touch with new developments in test automation (blogs, podcasts, …). And yes, I may hear about some things a little later than I would have through Twitter, and some things I may not hear about at all. But still, leaving Twitter has so far turned out to be a major net positive.

I’ve got some big projects coming up in the next year or so, and I’m sure I’ll be able to do more and better work without the constant distraction and anxiety that Twitter gave me in recent times.

So, what’s up in the future? Lots! First of all, more training: this month is the first in which I’ve hit my revenue target purely through in-company training, and I hope there are many more to come. I’ve got a couple of conference gigs coming up as well, most notably at this point my keynote and workshop at the Agile & Automation Days in Gdańsk, Poland, plus a couple of local speaking and workshop gigs. And I’m negotiating another big project, one that I hope to share more information about in a couple of months’ time.

Oh, and since I’m not getting any younger, I’ve set myself a mildly ambitious running-related goal as well, and I’m sure that the added headspace will help me stay focused and determined to achieve it. I’ll gladly take not being able to brag about it on Twitter if I make it.

One immediate benefit of no longer being on Twitter is that I seem to be able to read more. Just yesterday, I finished ‘Digital Minimalism’ by Cal Newport, and while this book wasn’t the reason for deactivating my account, it surely was the right read at the right moment!

Writing API tests in Python with Tavern

So far, most of the blog posts I’ve written that covered specific tools were focused on either Java or C#. Recently, though, I got a request for test automation training for a group of data science engineers, with the explicit requirement to use Python-based tools for the examples and exercises.

Since then, I’ve been slowly expanding my reading and learning to also include the Python ecosystem, and I’ve also included a couple of Python-based test automation courses in my training offerings. So far, I’m pretty impressed. There are plenty of powerful test tools available for Python, and in this post, I’d like to take a closer look at one of them, Tavern.

Tavern is an API testing framework that runs on top of pytest, one of the most popular Python unit testing frameworks. It offers a range of features for writing and running API tests, and if there’s something you can’t do with Tavern, it claims to be easily extensible through Python or pytest hooks and features. I can’t vouch for its extensibility yet, though, since everything I’ve done with Tavern so far was possible out of the box. Tavern also has good documentation, which is nice.

Installing Tavern on your machine is easiest through pip, the Python package installer and manager, using the following command:

pip install -U tavern

Tests in Tavern are written in YAML files. Now, you either love it or hate it, but it works. To get started, let’s write a test that retrieves location data for the US zip code 90210 from the Zippopotam.us API and checks whether the response HTTP status code is equal to 200. This is what that looks like in Tavern:

test_name: Get location for US zip code 90210 and check response status code

stages:
  - name: Check that HTTP status code equals 200
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      status_code: 200

As I said, Tavern runs on top of pytest. So, to run this test, we need to invoke pytest and tell it that the tests we want to run are in the YAML file we created.
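
Assuming the test above is saved in a file called test_zippopotam.tavern.yaml (the file name here is my choice; Tavern does expect test file names to follow the test_*.tavern.yaml pattern so that pytest can discover them), the invocation looks like this:

pytest test_zippopotam.tavern.yaml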

When you run it, you will see that the test passes.

Another thing you might be interested in is checking values for specific response headers. Let’s check that the response content type is equal to ‘application/json’, telling the API consumer that they need to interpret the response as JSON:

test_name: Get location for US zip code 90210 and check response content type

stages:
  - name: Check that content type equals application/json
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      headers:
        content-type: application/json

Of course, you can also perform checks on the response body. Here’s an example that checks that the place name associated with the aforementioned US zip code 90210 is equal to ‘Beverly Hills’:

test_name: Get location for US zip code 90210 and check response body content

stages:
  - name: Check that place name equals Beverly Hills
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      body:
        places:
          - place name: Beverly Hills

Since APIs are all about data, you might want to repeat the same test more than once, but with different values for input parameters and expected outputs (i.e., do data driven testing). Tavern supports this too by exposing the pytest parametrize marker:

test_name: Check place name for multiple combinations of country code and zip code

marks:
  - parametrize:
      key:
        - country_code
        - zip_code
        - place_name
      vals:
        - [us, 12345, Schenectady]
        - [ca, B2A, North Sydney South Central]
        - [nl, 3825, Vathorst]

stages:
  - name: Verify place name in response body
    request:
      url: http://api.zippopotam.us/{country_code}/{zip_code}
      method: GET
    response:
      body:
        places:
          - place name: "{place_name}"

Even though we specified only a single test with a single stage, because we used the parametrize marker and supplied the test with three test data records, pytest effectively runs three tests (similar to what @DataProvider does in TestNG for Java, for example):

Tavern output for our data driven test, run from within PyCharm

So far, we have only performed GET operations to retrieve data from an API provider, so we did not need to specify any request body contents. When, as an API consumer, you want to send data to an API provider, for example when you perform a POST or a PUT operation, you can do that like this using Tavern:

test_name: Check response status code for a very simple addition API

stages:
  - name: Verify that status code equals 200 when two integers are specified
    request:
      url: http://localhost:5000/add
      json:
        first_number: 5
        second_number: 6
      method: POST
    response:
      status_code: 200

This test will POST a JSON document

{"first_number": 5, "second_number": 6}

to the API provider running on localhost port 5000. Please note that, for obvious reasons, this test will fail when you run it yourself, unless you have built an API or a mock that behaves in a way that makes the test pass (a great exercise, but a different subject …).
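
If you want to try this yourself, here is a minimal sketch of such an API written in Flask (this is my own illustration, not part of the original example):

from flask import Flask, jsonify, request

app = Flask(__name__)

# a deliberately simple endpoint that adds the two numbers
# from the incoming JSON document and returns their sum
@app.route('/add', methods=['POST'])
def add():
    data = request.get_json()
    return jsonify({'sum': data['first_number'] + data['second_number']})

if __name__ == '__main__':
    app.run(port=5000)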

So, that’s it for a quick introduction to Tavern. I quite like the tool for its straightforwardness. What I’m still wondering is whether working with YAML will lead to maintainability and readability issues for larger test suites with larger request or response bodies. I’ll keep working with Tavern in my training courses for now, so a follow-up blog post might see the light of day in a while!

All examples can be found on this GitHub page.