On quitting Twitter and looking forward

Many of you probably have not noticed, but last weekend I deactivated my Twitter account. Here’s why.

I’ve been active on Twitter for around four years. In that time, it has proven to be a valuable tool for keeping up with industry trends, staying in touch with people I know from conferences and other events, and following a few sources of information unrelated to testing or IT (mainly running or music).

My 'I'm leaving Twitter' announcement

Over time, though, the value I got from Twitter was slowly getting undone by two things in particular:

  1. The urge to check my feed and notifications far too often.
  2. The amount of negativity and bickering going on.

I started to notice that, because of these two things, I became ever more restless, distracted and downright anxious, and while Twitter may not have been the only cause, it was definitely a large contributor.

I couldn’t write, code or create courseware for more than 10 minutes without checking my feed. My brain was often too fried in the evening to undertake anything apart from mindless TV watching, playing the odd mobile game, or consuming even more social media. Not a state I want to be in, and definitely not an example I want to set for my children.

So, at first, I decided to take a Twitter break. I removed the app from my phone, I blocked access to the site on my laptop and activated Screen Time on my phone to make accessing the mobile Twitter site more of a hassle. And it worked. Up to a point.

My mind became clearer, and I became less anxious. But it still didn’t feel like enough. The anxiety was still there, and I still kept taking the extra steps needed to check my Twitter feed on my phone. That’s when I thought long and hard about what value I was getting from being on Twitter in the first place. After a while, I came to the realization that it was simply too little to warrant the restlessness, and that the only reasonable thing to do was to pull the plug on my account.

So that’s what I did. And it feels great.

I’m sure I’ll still stay in touch with most people in the field. Business-wise, LinkedIn was and is a much more important source of leads and gigs anyway. There are myriad other ways to keep up with new developments in test automation (blogs, podcasts, …). And yes, I may hear about some things a little later than I would have through Twitter, and I may not hear about some of them at all. But still, leaving Twitter has so far turned out to be a major net positive.

I’ve got some big projects coming up in the next year or so, and I’m sure I’ll be able to do more and better work without the constant distraction and anxiety that Twitter gave me in recent times.

So, what’s up in the future? Lots! First of all, more training: this is the first month in which I’ve hit my revenue target purely through in-company training, and I hope many more will follow. I’ve got a couple of conference gigs coming up as well, most notably my keynote and workshop at the Agile & Automation Days in Gdańsk, Poland, plus a couple of local speaking and workshop gigs. And I’m negotiating another big project, one that I hope to share more information about in a couple of months’ time.

Oh, and since I’m not getting any younger, I’ve set myself a mildly ambitious running-related goal as well, and I’m sure the added headspace will help me stay focused and determined to achieve it. If I make it, I’ll gladly accept not being able to brag about it on Twitter.

One immediate benefit of not being on Twitter anymore is that I seem to be able to read more. Just yesterday, I finished ‘Digital Minimalism’ by Cal Newport, and while this book wasn’t the reason for deactivating my account, it was definitely the right read at the right moment!

Writing API tests in Python with Tavern

So far, most of the blog posts I’ve written that covered specific tools were focused on either Java or C#. Recently, though, I got a request for test automation training for a group of data science engineers, with the explicit requirement to use Python-based tools for the examples and exercises.

Since then, I’ve been slowly expanding my reading and learning to also include the Python ecosystem, and I’ve also included a couple of Python-based test automation courses in my training offerings. So far, I’m pretty impressed. There are plenty of powerful test tools available for Python, and in this post, I’d like to take a closer look at one of them, Tavern.

Tavern is an API testing framework that runs on top of pytest, one of the most popular Python unit testing frameworks. It offers a range of features to write and run API tests, and if there’s something you can’t do with Tavern, it claims to be easily extensible through Python or pytest hooks and features. I can’t vouch for its extensibility yet, though, since everything I’ve done with Tavern so far was possible out of the box. Tavern also has good documentation, which is a nice bonus.

Installing Tavern on your machine is easiest through pip, the Python package installer and manager, using the following command:

pip install -U tavern

Tests in Tavern are written in YAML files. Now, YAML is something you either love or hate, but it works. To get started, let’s write a test that retrieves location data for the US zip code 90210 from the Zippopotam.us API and checks whether the response HTTP status code equals 200. This is what that looks like in Tavern:

test_name: Get location for US zip code 90210 and check response status code

stages:
  - name: Check that HTTP status code equals 200
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      status_code: 200

As I said, Tavern runs on top of pytest. So, to run this test, we need to invoke pytest and tell it that the tests we want to run are in the YAML file we created:
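Assuming the test above is saved in a file called test_us_zip_code.tavern.yaml (the name is just an example; Tavern picks up files whose names start with test_ and end in .tavern.yaml), the command could look something like this:

pytest test_us_zip_code.tavern.yaml -v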

When you run this, you’ll see that the test passes.

Another thing you might be interested in is checking values for specific response headers. Let’s check that the response content type is equal to ‘application/json’, telling the API consumer that they need to interpret the response as JSON:

test_name: Get location for US zip code 90210 and check response content type

stages:
  - name: Check that content type equals application/json
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      headers:
        content-type: application/json

Of course, you can also perform checks on the response body. Here’s an example that checks that the place name associated with the aforementioned US zip code 90210 is equal to ‘Beverly Hills’:

test_name: Get location for US zip code 90210 and check response body content

stages:
  - name: Check that place name equals Beverly Hills
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      body:
        places:
          - place name: Beverly Hills
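For reference, the JSON body the API returns for this zip code contains (among other fields) a places array, which is why the expected value is nested under a list item, and why the field name contains a space. Abridged, and from memory, it looks something like this:

{
  ...
  "places": [
    {
      "place name": "Beverly Hills",
      ...
    }
  ]
}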

Since APIs are all about data, you might want to repeat the same test more than once, but with different values for the input parameters and expected outputs (i.e., do data-driven testing). Tavern supports this too, by exposing the pytest parametrize marker:

test_name: Check place name for multiple combinations of country code and zip code

marks:
  - parametrize:
      key:
        - country_code
        - zip_code
        - place_name
      vals:
        - [us, 12345, Schenectady]
        - [ca, B2A, North Sydney South Central]
        - [nl, 3825, Vathorst]

stages:
  - name: Verify place name in response body
    request:
      url: http://api.zippopotam.us/{country_code}/{zip_code}
      method: GET
    response:
      body:
        places:
          - place name: "{place_name}"

Even though we specified only a single test with a single stage, because we used the parametrize marker and supplied the test with three test data records, pytest effectively runs three tests (similar to what @DataProvider does in TestNG for Java, for example):

Tavern output for our data driven test, run from within PyCharm
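For comparison, here is roughly what the same data-driven test would look like when written directly in pytest, without Tavern. This is just a sketch; it assumes the requests library for the HTTP calls, and the test function name is my own choosing:

import pytest
import requests

# Roughly the plain-pytest equivalent of the Tavern test above
@pytest.mark.parametrize("country_code, zip_code, place_name", [
    ("us", "12345", "Schenectady"),
    ("ca", "B2A", "North Sydney South Central"),
    ("nl", "3825", "Vathorst"),
])
def test_place_name(country_code, zip_code, place_name):
    response = requests.get(f"http://api.zippopotam.us/{country_code}/{zip_code}")
    assert response.status_code == 200
    # 'places' is a list; the first entry holds the place name
    assert response.json()["places"][0]["place name"] == place_name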

So far, we have only performed GET operations to retrieve data from an API provider, so we did not need to specify any request body contents. When, as an API consumer, you want to send data to an API provider, for example when you perform a POST or a PUT operation, you can do that like this using Tavern:

test_name: Check response status code for a very simple addition API

stages:
  - name: Verify that status code equals 200 when two integers are specified
    request:
      url: http://localhost:5000/add
      json:
        first_number: 5
        second_number: 6
      method: POST
    response:
      status_code: 200

This test will POST a JSON document

{"first_number": 5, "second_number": 6}

to the API provider running on localhost port 5000. Please note that for obvious reasons this test will fail when you run it yourself, unless you built an API or a mock that behaves in a way that makes the test pass (great exercise, different subject …).
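If you do want to run it, here’s a minimal sketch of an API that would satisfy this test. I’m assuming Flask here purely as an example; any web framework or API mocking tool would do just as well:

from flask import Flask, jsonify, request

app = Flask(__name__)

# A deliberately simple addition endpoint, just enough to make the test above pass
@app.route('/add', methods=['POST'])
def add():
    data = request.get_json()
    return jsonify({'result': data['first_number'] + data['second_number']}), 200

if __name__ == '__main__':
    app.run(port=5000)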

So, that’s it for a quick introduction to Tavern. I quite like the tool for its straightforwardness. What I’m still wondering is whether working with YAML will lead to maintainability and readability issues when you’re working with larger test suites and larger request or response bodies. I’ll keep working with Tavern in my training courses for now, so a follow-up blog post might see the light of day in a while!

All examples can be found on this GitHub page.

Why test automation is a lot like bubble wrap

So, a couple of weeks ago I had the pleasure of delivering a keynote at the 2019 UKStar conference in London, where I talked about how asking better questions (one tip: ask why first, ask how later) can help teams and organizations prevent ‘automation for automation’s sake’ and increase their chances of test automation actually becoming a valuable asset in the overall software development and testing life cycle.

In this talk, I used an analogy comparing test automation to bubble wrap, in an attempt to help people see test automation in a slightly different light than the ‘be all end all’ solution to all testing problems that it’s still perceived as too often. This analogy sparked a couple of mentions and questions for clarification on Twitter afterwards, so I thought it would be a good idea to repeat and expand on it in this blog post.

Street wrapped in bubble wrap

So, why do I think that test automation is similar to bubble wrap?

It has little value on its own
You might not be able to tell from the incredible amounts of time and money that organizations spend on test automation, but in themselves, automated test scripts have very little value. Just like a roll of bubble wrap doesn’t set you back a whole lot of money (I’ve found a roll 1 meter wide and 100 meters long for under 40 euros), nobody is going to wake up in the morning planning to spend a lot of money on automated tests. But why are organizations still investing so much in it, then? That’s because…

It’s used to ship another product of much higher value safely to its destination
The value of both bubble wrap and test automation is instead in what they provide (when applied well, of course): safety. Just like inexpensive bubble wrap can be used to ship expensive products (china vases, for example) safely to the other side of the world, the main purpose of test automation is to enable teams to ship a software product that does provide value to its destination (or at least bring it a step closer): production.

There’s often too much of it in the package
I don’t know about you, but I order a lot of my shopping online, and all too often, the delivery person presents me with a large box that’s more than half filled with bubble wrap (or those fancy air-filled bags). Similarly, software development teams still spend far too much time writing lots of test automation. Why? Because all those green check marks give them a feeling of safety. Everybody feels good when you tell them you’ve added 25 automated tests to the suite. Far fewer people, however, make a habit of checking whether those tests actually serve a purpose…

It doesn’t protect your product against everything that could go wrong with it
Bubble wrap might protect your product from breaking when it falls. However, it doesn’t protect you against theft, or your package getting lost in the mail. Similarly, test automation doesn’t protect your software against all types of risks. It might protect you against some risks.

I cannot make this point without referring you to the example that Alex Schladebeck gave in a recent TestTalks podcast episode:

Excerpt from the interview with Alex Schladebeck on TestTalks

I’m referring to the same principle here, although Alex put it much more eloquently than I do.

Oh, and finally…

It’s a lot of fun to play with!
No further comment necessary 🙂