2019 – a year in review

2019 is almost at an end and wow, what a year it has been. A lot of ups, some downs, but a major net positive overall. As I did last year, with this post I’d like to reflect a little on all that I’ve been working on this year, and share with you some of my plans for 2020, too.

Training
As I said last year, my main goal for 2019 was to extend my training efforts. I am pretty happy with how this turned out, overall. While the year started a little slow, it took off around May, with a couple of downright crazy weeks around October and November. In total, I delivered 30 full days of in-company training and another 19 half-day and evening training sessions, for 15 different clients. Collaborating with training companies helped a lot in getting to this result, and I'm looking forward to continuing to work with all of them next year.

Most of the training I delivered featured writing test automation in Java, with C# coming in second, Python at #3, and here and there some JavaScript, too.

On top of that, I delivered two full-day conference workshops (one at the Agile & Automation Days in Poland, the other at TestBash Netherlands) and three half-day workshops (two at the TestNet and Test Automation Days conferences, one at a meetup).

The highlight of this year with regard to training was probably my trip to the UK in November to deliver two full days of in-company API testing and automation training. One thing I need to work on next year is finding a better balance between the busy and the slow weeks and months by building a steadier pipeline of training work.

Consulting
I worked with two consulting clients this year. The one I spent the most time with was an on-site gig here in the Netherlands, where I was (and still am) tasked with coaching a number of testers (and entire development teams) on the implementation of test automation. It's a really interesting and fulfilling gig that will continue at least through the first couple of months of next year, and I'm really happy with the results we've achieved and the progress we have made.

Since October, I've also been doing some remote consulting with (and writing for) a consultancy firm in the United States, and so far this has been a really interesting and rewarding gig, too. I'd love to build on this relationship in 2020 and maybe find some other remote consulting clients as well. The idea of being able to work from anywhere, for organizations anywhere on the planet, without having to commute, is something that really appeals to me. So, if you are or know of an organization that could use some advice or consulting in the area of test automation, please contact me; I'd love to talk and see if I can help you in 2020.

Writing
2019 has been a pretty active year in writing for me, too. I have written and published 10 articles on various industry blogs and websites, and another 8 blog posts (including this one) on this site. I will continue writing next year, as I think it’s still a great way to process and structure my thoughts, as well as a good excuse to learn new things myself.

Public speaking
This year, I've done 9 talks, mostly at meetups and conferences, but also one in-house with a client. Six of these were regular talks, one was a deep dive with some live coding, and, most notably, two were international keynotes: one at UKStar (London, in March) and the other at the Agile & Automation Days (Poland, in October). I really enjoyed both of these talks and received some good and constructive feedback on them.

Even though I’ll mostly be focusing on doing workshops at conferences and meetups (since that’s what I like to do best), I hope to be able to do a couple of talks next year, too. I’ve got one conference planned so far (in June) and hope to add a couple more to the agenda as 2020 progresses.

Other activities
Apart from all of the above, I've done one webinar this year (with Parasoft), appeared as a guest on a podcast (with the fine people who host de Voorproeverij) and had my first online course published with Test Automation University. I'm looking forward to seeing what opportunities 2020 will bring.

The freelance life
No surprises here: I thoroughly enjoyed working as a freelancer this year, and I'm even more convinced that this is the ideal way of working for me, at least for the next couple of years. The total freedom of going where I want to go and working on what I want to work on has served me very well again this year. It has also given me the opportunity to be there for my family when that was needed, without having to jump through hoops or account for fewer hours or days worked. I'm very much looking forward to another year of freelancing in 2020.

For now, though, it’s time to wind down for a couple of weeks and recharge. Here’s to 2020 becoming an awesome year for all of us.

Writing tests for RESTful APIs in Python using requests – part 2: data driven tests

Recently, I delivered my first ever three-day 'Python for testers' training course. One of the topics covered in this course was writing tests for RESTful APIs using the Python requests library and the pytest unit testing framework.

In this short series of blog posts, I want to explore the Python requests library and how it can be used for writing tests for RESTful APIs. This is the second blog post in the series, in which we will cover writing data driven tests. The first blog post in the series can be found here.

About data driven testing
Before we get started, let’s quickly review what data driven tests are.

Often, when developing automated tests, you will find yourself wanting to test the same business logic, algorithm, computation or application flow multiple times with different input values and expected output values. Now, technically, you could achieve that by simply copying and pasting an existing test and changing the required values.

From a maintainability perspective, however, that’s not a good idea. Instead, you might want to consider writing a data driven test: a test that gets its test data from a data source and iterates over the rows (or records) in that data source.

Creating a test data object
Most unit testing frameworks provide support for data driven testing, and pytest is no exception. Before we see how to create a data driven test in Python, let’s create our test data source first. In Python, this can be as easy as creating a list of tuples, where each tuple in the list corresponds to an iteration of the data driven test (a ‘test case’, if you will).

test_data_zip_codes = [
    ("us", "90210", "Beverly Hills"),
    ("ca", "B2A", "North Sydney South Central"),
    ("it", "50123", "Firenze")
]

We’re going to run three iterations of the same test: retrieving location data for a given country code and zip code (the first two elements in each tuple) and then asserting that the corresponding place name returned by the API is equal to the specified expected place name (the third tuple element).

Creating a data driven test in pytest
Now that we have our test data available, let’s see how we can convert an existing test from the first blog post into a data driven test.

import pytest
import requests

@pytest.mark.parametrize("country_code, zip_code, expected_place_name", test_data_zip_codes)
def test_using_test_data_object_get_location_data_check_place_name(country_code, zip_code, expected_place_name):
    response = requests.get(f"http://api.zippopotam.us/{country_code}/{zip_code}")
    response_body = response.json()
    assert response_body["places"][0]["place name"] == expected_place_name

Pytest supports data driven testing through the built-in @pytest.mark.parametrize marker. This marker takes two arguments: the first tells pytest how (i.e., in which order) to map the elements in a tuple from the data source to the arguments of the test method, and the second argument is the test data object itself.

The test methods we have seen in the previous post did not have any arguments, but since we're feeding test data to our tests from the outside, we need to specify three arguments for the test method here: the country code, the zip code and the expected place name. We can then use these arguments in the test method body: the first two as path parameter values in the API call, the last one as the expected value to compare against the place name extracted from the JSON response body.

Running the test
When we run our data driven test, we see that even though we only have a single test method, pytest detects and runs three tests. Or rather: it runs the same test three times, once for each tuple in the test data object.

Console output for a passing data driven test

This, to me, demonstrates the power of data driven testing. We can run as many iterations as required for a given test, without duplicating code, as long as we tell pytest where to find the test data. Need an additional test iteration with different test data values? Just add a record to the test data object. Want to update or remove a test case? You know the drill.
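For example, adding a fourth iteration is just a matter of appending another tuple to the test data object. A minimal sketch; the extra record below is illustrative, so verify the expected place name against the actual API response before using it:

test_data_zip_codes = [
    ("us", "90210", "Beverly Hills"),
    ("ca", "B2A", "North Sydney South Central"),
    ("it", "50123", "Firenze"),
    ("us", "10001", "New York City")  # illustrative extra record, check against the API
]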

Another useful thing about data driven testing with pytest: when one of the test iterations fails, pytest tells you which one failed and which test data values were used:

Console output for a failing data driven test
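A side note that is not covered in the example above: pytest derives an identifier for each iteration from the parameter values, and the parametrize marker also accepts an optional ids argument if you prefer more descriptive names. A minimal sketch, reusing the test from earlier:

@pytest.mark.parametrize(
    "country_code, zip_code, expected_place_name",
    test_data_zip_codes,
    ids=["US zip code", "Canadian zip code", "Italian zip code"]
)
def test_using_test_data_object_get_location_data_check_place_name(country_code, zip_code, expected_place_name):
    response = requests.get(f"http://api.zippopotam.us/{country_code}/{zip_code}")
    response_body = response.json()
    assert response_body["places"][0]["place name"] == expected_place_name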

Creating an external data source
In the example above, our test data was still hardcoded into our test code. This might not be your preferred way of working. What if we could specify the test data in an external data source instead, and tell pytest to read it from there?

As an example, let’s create a .csv file that contains the same test data as the test data object we’ve seen earlier:

country_code,zip_code,expected_place_name
us,90210,Beverly Hills
ca,B2A,North Sydney South Central
it,50123,Firenze

To use this test data in our test, we need to write a Python method that reads the data from the file and returns it in a format that's compatible with the pytest parametrize marker. Python offers solid support for handling .csv files through the built-in csv module:

import csv

def read_test_data_from_csv():
    test_data = []
    with open('test_data/test_data_zip_codes.csv', newline='') as csvfile:
        data = csv.reader(csvfile, delimiter=',')
        next(data)  # skip header row
        for row in data:
            test_data.append(row)
    return test_data

This method opens the .csv file in read mode, skips the header row, appends all remaining rows to the test_data list one by one, and returns that list of test data values.

The test method itself now needs to be updated to not use the hardcoded test data object anymore, but instead use the return value of the method that reads the data from the .csv file:

@pytest.mark.parametrize("country_code, zip_code, expected_place_name", read_test_data_from_csv())
def test_using_csv_get_location_data_check_place_name(country_code, zip_code, expected_place_name):
    response = requests.get(f"http://api.zippopotam.us/{country_code}/{zip_code}")
    response_body = response.json()
    assert response_body["places"][0]["place name"] == expected_place_name

Running this updated test code will show that this approach, too, results in three passing test iterations. Of course, you can use test data sources other than .csv too, such as database query results or XML or JSON files. As long as you’re able to write a method that returns a list of test data value tuples, you should be good to go.
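As an illustration, here's a minimal sketch of what such a method could look like for a JSON file; the file name and its structure are assumptions made for this example and are not part of the project referenced below:

import json

def read_test_data_from_json():
    # Hypothetical file containing a list of [country_code, zip_code, expected_place_name]
    # entries, e.g. [["us", "90210", "Beverly Hills"], ["it", "50123", "Firenze"]]
    with open("test_data/test_data_zip_codes.json") as json_file:
        records = json.load(json_file)
    # The parametrize marker accepts any iterable of argument tuples
    return [tuple(record) for record in records]

The return value of this method can then be passed to the parametrize marker in exactly the same way as the return value of read_test_data_from_csv().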

In the next blog post, we’re going to further explore working with JSON and XML in API request and response bodies.

Using the examples for yourself
The code examples I have used in this blog post can be found on my GitHub page. If you download the project and (given you have installed Python properly) run

 pip install -r requirements.txt 

from the root of the python-requests project to install the required libraries, you should be able to run the tests for yourself. See you next time!

Writing tests for RESTful APIs in Python using requests – part 1: basic tests

Recently, I delivered my first ever three-day 'Python for testers' training course. One of the topics covered in this course was writing tests for RESTful APIs using the Python requests library and the pytest unit testing framework.

In this short series of blog posts, I want to explore the Python requests library and how it can be used for writing tests for RESTful APIs. This first blog post is all about getting started and writing our first tests against a sample REST API.

Getting started
To get started, first we need a recent installation of the Python interpreter, which can be downloaded here. We then need to create a new project in our IDE (I use PyCharm, but any decent IDE works) and install the requests library. The easiest way to do this is using pip, the Python package manager:

 pip install -U requests 

Don’t forget to create and activate a virtual environment if you prefer that setup. We’ll also need a unit testing framework to provide us with a test runner, an assertion library and some basic reporting functionality. I prefer pytest, but requests works equally well with other Python unit testing frameworks.

 pip install -U pytest 

Then, all we need to do to get started is to create a new Python file and import the requests library using

import requests

Our API under test
For the examples in this blog post, I'll be using the Zippopotam.us REST API. This API takes a country code and a zip code and returns location data associated with that country and zip code. For example, a GET request to http://api.zippopotam.us/us/90210 returns an HTTP status code 200 and the following JSON response body:

{
    "post code": "90210",
    "country": "United States",
    "country abbreviation": "US",
    "places": [
        {
            "place name": "Beverly Hills",
            "longitude": "-118.4065",
            "state": "California",
            "state abbreviation": "CA",
            "latitude": "34.0901"
        }
    ]
}

A first test using requests and pytest
As a first test, let’s use the requests library to invoke the API endpoint above and write an assertion that checks that the HTTP status code equals 200:

def test_get_locations_for_us_90210_check_status_code_equals_200():
    response = requests.get("http://api.zippopotam.us/us/90210")
    assert response.status_code == 200

What's happening here? In the first line of the test, we call the get() method from the requests library to perform an HTTP GET call to the specified endpoint, and we store the entire response in a variable called response. We then extract the status_code property from the response object and write an assertion, using a plain Python assert statement (which is all pytest needs), that checks that the status code is equal to 200, as expected.

That’s all there is to a first, and admittedly very basic, test against our API. Let’s run this test and see what happens. I prefer to do this from the command line, because that’s also how we will run the tests once they’re part of an automated build pipeline. We can do so by calling pytest and telling it where to look for test files. Using the sample project referenced at the end of this blog post, and assuming we’re in the project root folder, calling

 pytest tests\01_basic_tests.py 

results in the following console output:

Console output showing a passing test.

It looks like our test is passing. Since I never trust a test I haven’t seen fail (and neither should you), let’s change the expected HTTP status code from 200 to 201 and see what happens:
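For reference, this is what that deliberately failing variant looks like; it is the same test as above, with only the expected status code changed:

def test_get_locations_for_us_90210_check_status_code_equals_200():
    response = requests.get("http://api.zippopotam.us/us/90210")
    # Deliberately wrong expectation: the API returns 200, so this assertion should fail
    assert response.status_code == 201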

Console output showing a failing test.

That makes our test fail, as you can see. It looks like we’re good to go with this one.

Extending our test suite
Typically, we’ll be interested in things other than the response HTTP status code, too. For example, let’s check if the value of the response content type header correctly identifies that the response body is in JSON format:

def test_get_locations_for_us_90210_check_content_type_equals_json():
    response = requests.get("http://api.zippopotam.us/us/90210")
    assert response.headers['Content-Type'] == "application/json"

In the response object, the headers are available as a dictionary (a collection of key-value pairs) called headers, which makes extracting the value of a specific header a matter of supplying the right key (the header name). We can then assert on its value using a pytest assertion and the expected value of 'application/json'.

How about checking the value of a response body element? Let’s first check that the response body element country (see the sample JSON response above) is equal to United States:

def test_get_locations_for_us_90210_check_country_equals_united_states():
    response = requests.get("http://api.zippopotam.us/us/90210")
    response_body = response.json()
    assert response_body["country"] == "United States"

The requests library comes with a built-in JSON decoder, which we can use to extract the response body from the response object and turn it into a proper JSON object. It is invoked using the json() method, which will raise a ValueError if there is no response body at all, as well as when the response is not valid JSON.
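As an aside, you can check that behaviour with pytest.raises(). A minimal sketch, using httpbin.org/html purely as an example of an endpoint that returns a non-JSON (HTML) response body; this endpoint is not part of the original examples:

import pytest
import requests

def test_json_decoding_a_non_json_body_raises_value_error():
    # httpbin.org/html returns an HTML page, so decoding it as JSON should fail
    response = requests.get("https://httpbin.org/html")
    with pytest.raises(ValueError):
        response.json()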

When we have decoded the response body into a JSON object, we can access elements in the body by referring to their name, in this case country.

To extract and assert on the value of the place name for the first place in the list of places, for example, we can do something similar:

def test_get_locations_for_us_90210_check_city_equals_beverly_hills():
    response = requests.get("http://api.zippopotam.us/us/90210")
    response_body = response.json()
    assert response_body["places"][0]["place name"] == "Beverly Hills"

As a final example, let’s check that the list of places returned by the API contains exactly one entry:

def test_get_locations_for_us_90210_check_one_place_is_returned():
    response = requests.get("http://api.zippopotam.us/us/90210")
    response_body = response.json()
    assert len(response_body["places"]) == 1

This, too, is straightforward after we've converted the response body to JSON. Python's built-in len() function returns the number of items in a list, in this case the list of items that is the value of the places element in the JSON document returned by the API.

In the next blog post, we’re going to explore creating data driven tests using pytest and requests.

Using the examples for yourself
The code examples I have used in this blog post can be found on my GitHub page. If you download the project and (given you have installed Python properly) run

 pip install -r requirements.txt 

from the root of the python-requests project to install the required libraries, you should be able to run the tests for yourself. See you next time!