Writing API tests in Python with Tavern

So far, most of the blog posts I’ve written that covered specific tools were focused on either Java or C#. Recently, though, I got a request for test automation training for a group of data science engineers, with the explicit requirement to use Python-based tools for the examples and exercises.

Since then, I’ve been slowly expanding my reading and learning to also include the Python ecosystem, and I’ve also included a couple of Python-based test automation courses in my training offerings. So far, I’m pretty impressed. There are plenty of powerful test tools available for Python, and in this post, I’d like to take a closer look at one of them, Tavern.

Tavern is an API testing framework that runs on top of pytest, one of the most popular Python unit testing frameworks. It offers a range of features for writing and running API tests, and if there's something you can't do out of the box, Tavern claims to be easily extensible through Python or pytest hooks and features. I can't vouch for its extensibility yet, though, since everything I've done with Tavern so far was possible out of the box. Tavern also has good documentation, which is a nice bonus.

The easiest way to install Tavern on your machine is through pip, the Python package installer, using the command

pip install -U tavern

Tests in Tavern are written in YAML files. YAML: you either love it or hate it, but it works. To get started, let's write a test that retrieves location data for the US zip code 90210 from the Zippopotam.us API and checks whether the response HTTP status code equals 200. This is what that looks like in Tavern:

test_name: Get location for US zip code 90210 and check response status code

stages:
  - name: Check that HTTP status code equals 200
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      status_code: 200

As I said, Tavern runs on top of pytest. So, to run this test, we need to invoke pytest and tell it that the tests we want to run are in the YAML file we created:
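Note that Tavern test files need to follow the test_*.tavern.yaml naming convention for pytest to be able to collect them. Assuming we saved the test above as test_zip_code.tavern.yaml (a file name I made up for this example), we can run it like this:

pytest test_zip_code.tavern.yaml -v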

pytest picks up the YAML file, executes the request and the associated check, and reports that the test passes.

Another thing you might be interested in is checking values for specific response headers. Let’s check that the response content type is equal to ‘application/json’, telling the API consumer that they need to interpret the response as JSON:

test_name: Get location for US zip code 90210 and check response content type

stages:
  - name: Check that content type equals application/json
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      headers:
        content-type: application/json

Of course, you can also perform checks on the response body. Here’s an example that checks that the place name associated with the aforementioned US zip code 90210 is equal to ‘Beverly Hills’:

test_name: Get location for US zip code 90210 and check response body content

stages:
  - name: Check that place name equals Beverly Hills
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      body:
        places:
          - place name: Beverly Hills

Since APIs are all about data, you might want to repeat the same test more than once, but with different values for input parameters and expected outputs (in other words, do data-driven testing). Tavern supports this, too, by exposing the pytest parametrize marker:

test_name: Check place name for multiple combinations of country code and zip code

marks:
  - parametrize:
      key:
        - country_code
        - zip_code
        - place_name
      vals:
        - [us, 12345, Schenectady]
        - [ca, B2A, North Sydney South Central]
        - [nl, 3825, Vathorst]

stages:
  - name: Verify place name in response body
    request:
      url: http://api.zippopotam.us/{country_code}/{zip_code}
      method: GET
    response:
      body:
        places:
          - place name: "{place_name}"

Even though we specified only a single test with a single stage, because we used the parametrize marker and supplied the test with three test data records, pytest effectively runs three tests (similar to what @DataProvider does in TestNG for Java, for example):

Tavern output for our data-driven test, run from within PyCharm

So far, we have only performed GET operations to retrieve data from an API provider, so we did not need to specify any request body contents. When, as an API consumer, you want to send data to an API provider, for example in a POST or PUT operation, this is how to do that in Tavern:

test_name: Check response status code for a very simple addition API

stages:
  - name: Verify that status code equals 200 when two integers are specified
    request:
      url: http://localhost:5000/add
      json:
        first_number: 5
        second_number: 6
      method: POST
    response:
      status_code: 200

This test will POST a JSON document

{"first_number": 5, "second_number": 6}

to the API provider running on localhost port 5000. Please note that, for obvious reasons, this test will fail when you run it yourself, unless you have built an API or a mock that behaves in a way that makes the test pass (great exercise, different subject…).
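If you do want to run this test locally, here's a minimal sketch of what such an addition API could look like. I'm using Flask here purely as an illustration; this is an assumption on my part, not the actual implementation behind the example:

from flask import Flask, request, jsonify

app = Flask(__name__)

# A hypothetical addition endpoint: it reads two numbers from the JSON
# request body (using the same field names as the Tavern test above)
# and returns their sum in a JSON response.
@app.route('/add', methods=['POST'])
def add():
    data = request.get_json()
    return jsonify({'result': data['first_number'] + data['second_number']})

if __name__ == '__main__':
    app.run(port=5000)

With this app running, the POST test above should pass, since the endpoint responds with HTTP status code 200 on success.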

So, that’s it for a quick introduction to Tavern. I quite like the tool for its straightforwardness. What I’m still wondering is whether working with YAML will lead to maintainability and readability issues when you’re working with larger test suites and larger request or response bodies. I’ll keep working with Tavern in my training courses for now, so a follow-up blog post might see the light of day in a while!

All examples can be found on this GitHub page.

Why test automation is a lot like bubble wrap

So, a couple of weeks ago I had the pleasure of delivering a keynote at the 2019 UKStar conference in London, where I talked about how asking better questions (one tip: ask why first, ask how later) can help teams and organizations prevent ‘automation for automation’s sake’ and increase their chances of test automation actually becoming a valuable asset in the overall software development and testing life cycle.

In this talk, I used an analogy comparing test automation to bubble wrap, in an attempt to help people see test automation in a slightly different light than the 'be-all and end-all' solution to all testing problems that it is still too often perceived as. This analogy sparked a couple of mentions and questions for clarification on Twitter afterwards, so I thought it would be a good idea to repeat and expand on it in this blog post.

Street wrapped in bubble wrap

So, why do I think that test automation is similar to bubble wrap?

It has little value on its own
You might not be able to tell from the incredible amounts of time and money that organizations spend on test automation, but in themselves, automated test scripts have very little value. Just like buying a roll of bubble wrap doesn't set you back a whole lot of money (I've found a roll 1 meter wide and 100 meters long for under 40 euros), nobody's going to wake up in the morning planning to spend a lot of money on automated tests. But why are organizations still investing so much in it, then? That's because…

It’s used to ship another product of much higher value safely to its destination
The value of both bubble wrap and test automation instead lies in what they provide (when applied well, of course): safety. Just like inexpensive bubble wrap can be used to ship expensive products (china vases, for example) safely to the other side of the world, the main purpose of test automation is to help teams safely ship a software product that does provide value to its destination, or at least bring it a step closer: production.

There’s often too much of it in the package
I don't know about you, but I do a lot of my shopping online, and all too often, the delivery person presents me with a large box that's more than half filled with bubble wrap (or those fancy air-filled bags). Similarly, software development teams still too often spend a huge amount of time writing lots of test automation. Why? Because all those green check marks give them a feeling of safety. Everybody feels good when you tell them that you've added 25 automated tests to the suite. Far fewer people, however, make a habit of checking whether those tests actually serve a purpose…

It doesn’t protect your product against everything that could go wrong with it
Bubble wrap might protect your product from breaking when it falls. However, it doesn't protect you against theft, or against your package getting lost in the mail. Similarly, test automation doesn't protect your software against every type of risk; it only protects you against some of them.

I cannot make this point without referring you to the example that Alex Schladebeck gave in a recent TestTalks podcast episode:

Excerpt from the interview with Alex Schladebeck on TestTalks

I’m referring to the same principle here, although Alex put it much more eloquently than I do.

Oh, and finally…

It’s a lot of fun to play with!
No further comment necessary 🙂

Announcing my API testing masterclass

Do you want to learn more about APIs and how to test them? Have you been looking for a comprehensive course that teaches you everything there is to know about testing APIs and testing software systems at the API level? If so, you might want to read on!

Many API testing courses out there focus on a specific tool or technique that you can leverage to improve your API testing efforts. While those courses definitely have their use, I feel there’s much more to it if you really want to become well-versed in testing APIs and testing systems at the API level.

That's the reason I created a brand-new, three-day masterclass that will teach you, among other things, about:

  • APIs and their role in modern software systems
  • Why to test APIs, and why to test at the API level
  • What to look for when testing APIs
  • Exploring APIs for testing purposes
  • Using tools for API test automation
  • API performance and security testing
  • API specifications and contract testing
  • Mocking APIs for testing purposes

A much more detailed description of this API testing masterclass, including a day-to-day breakdown of course contents and learning goals, can be found on the course page.

Need to convince your manager?
Please send them this course flyer, highlighting all of the benefits and providing a summary of the training course, all neatly on a single page. And don’t forget to bring the Early Bird discount to their attention!

I’m looking forward to many successful deliveries of this API testing masterclass, and I hope to see you all at one of them.

Some words of thanks
I owe a lot of thanks to Maria Kedemo of Black Koi Consulting for planting the idea for this masterclass in my head at exactly the right time, as well as for discussing it further and for reviewing the content outline as it can be found on the course page.

I am also very grateful for the content outline review and valuable comments made by Elizabeth Zagroba, Angie Jones and Joe Colantonio.