Review: Automation Guild 2017

About half a year ago, in July of 2016 to be exact, I was invited by Joe from the well-known TestTalks podcast to contribute to a new initiative he had come up with: the Automation Guild conference. Joe was looking to organize an online conference fully dedicated to test automation, and he asked me if I wanted to host a session on testing RESTful APIs with REST Assured. Even though I’d never done anything like this before (or maybe because I’d never done anything like this before) I immediately said yes, only later realizing what it was, exactly, that I had agreed to do…

Since the conference was online and Joe was looking for the best possible experience for the Automation Guild delegates, he asked each of the speakers to record a video session in advance, including screen sharing and the writing and executing of code (where relevant, of course). This being an international conference, it also meant speaking in English, which made it all the more challenging for me personally. I’m fine with speaking English, but the experience of recording it, listening to it and editing all the ‘ermm’s and ‘uuuh’s out was something entirely new, and not exclusively pleasant either! It also took me a lot longer than expected, but in the end, I was fairly happy with the result. And I learned a lot in the process, from English pronunciation to video editing, so it was definitely not all bad!

Enough about that, back to the conference. It was held last week, January 9th to 13th, with around five sessions every day plus a couple of keynotes. The actual videos were released beforehand so all attendees could watch them whenever it best suited their schedule, while on the conference days there were live Q&A sessions with all of the speakers to create a live and interactive atmosphere. Having never participated in anything similar, and even though I caught only a couple of sessions due to other obligations (the time zone difference didn’t help either), I think this worked remarkably well.

My own Q&A session flew by too, with a lot of questions ranging from the fairly straightforward to the pretty complex. These questions did not just cover the contents of my session, but also API testing in general and there were some questions about service virtualization as well, which made it an even more interesting half hour.

I really liked the interactive Q&A part, both of my own talk and of the conference as a whole, since getting good questions meant that the material I presented hit home with the listeners. I’ve had conference talks before where the audience was suspiciously quiet afterwards, and that’s neither a good sign nor an agreeable experience, I can tell you. But in this case, there were a lot of questions, and we didn’t even get to all of them during the Q&A. If all goes well, I should receive the remaining questions later on and get to interact with a couple more listeners that way. But even so, I had an amazing time talking to Joe and (indirectly) to the attendees and answering their questions as best I could.

As for the other speakers: Joe managed to put together a world-class lineup, and I’m quite proud to have been a part of it. I never thought I’d be in a conference lineup together with John Sonmez, Alan Richardson, Seb Rose, Matt Wynne and so many other recognized names in the testing and test automation field. So far, I’ve only managed to watch a couple of the other speakers’ sessions, but luckily, all of them are available for a year after the end of the conference, so I’ll definitely watch more of them when time permits in a couple of weeks.

I can only speak for myself, but I think that the inaugural edition of Automation Guild was a big success, given such an incredible lineup and over 750 registered attendees. This is mostly due to the massive amount of effort Joe put into setting it up; I can’t even begin to imagine how much time it must have cost him. Having said that, I am already looking forward to the second edition next year. If not as a second-time contributor, then surely as an attendee! If you missed the conference this time around, mark your calendar for next year, because surely you don’t want to miss it again!

Choose wisely

In a recent article that was published on TechBeacon, I argued that writing tests at the API level often hits the sweet spot between speed of execution, stability (both in terms of execution and maintenance required) and test coverage. What I didn’t write about in this article is what motivated me to write the article in the first place, so I thought it might be a good idea to dedicate a blog post to the reason behind the piece.

There really was only a single reason for me to suggest the topic to the people at TechBeacon: I see things go wrong too often when people start creating automated tests. I’m currently working with a number of clients on two separate projects, and what a lot of them seem to have in common is that they turn straight to end-to-end tests (often using a tool like Selenium or Protractor) to create the checks that go beyond the scope of unit tests.

As an example, I’m working on a project where we are going to create automated checks for a web shop that sells electronic cigarettes and related accessories in the United States. There are several product categories involved, several customer age groups to be considered (some products can be purchased if you’re over 18, some over 21, some are fit for all ages, etc.), and, this being the US, fifty different states, each with their own rules and regulations. In short, there’s a massive amount of possible combinations (I didn’t do the math yet, but it’s easily in the hundreds). Also, due to the strict US regulations, and more importantly the fines associated with violating these rules, they want all relevant combinations included in the automated test.
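To get a feel for the numbers, here is a quick back-of-the-envelope calculation. The counts for product categories and age groups are made up for illustration (the post doesn’t list the real ones), but even modest assumptions land well into the hundreds:

```python
from itertools import product

# Hypothetical counts; the real web shop's numbers may well differ.
product_categories = ["e-cigarette", "e-liquid", "accessory"]  # assume 3 categories
age_groups = ["under 18", "18-20", "21 and over"]              # assume 3 age groups
states = [f"state_{i}" for i in range(50)]                     # 50 US states

# Every (category, age group, state) combination is a potential test case.
combinations = list(product(product_categories, age_groups, states))
print(len(combinations))  # 3 * 3 * 50 = 450 combinations
```

With more than three product categories or extra dimensions (shipping restrictions, say), the count only grows, which is exactly why running each combination through the UI becomes untenable.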

Fair enough, but the problem started when they suggested we write an automated end-to-end test case for each of the possible combinations. That means creating an order for every combination of product group, age group and state, and every order involves filling out three or four separate forms and some additional more straightforward web page navigation. In other words, this would result in a test that would be slow to execute (we’re talking hours here) and possibly quite hard to maintain as well.

Instead, I used Fiddler to analyze what exactly it was that the web application did in order to determine whether a customer could order a given product. Lo and behold: it simply called an API that exposed the business logic used to make this decision. So, instead of creating hundreds of UI-driven test cases, I suggested creating API-level tests that would verify the business logic configuration, and adding a couple of end-to-end tests to verify that a user can indeed place an order successfully, as well as receive an error message when he or she tries to order a product that isn’t available for a specific reason.

We’re still working on this, but I think this case illustrates my point fairly well: it often pays off big time to look beyond the user interface when you’re creating automated tests for web applications:

  • Only use end-to-end tests to verify whether a user of your web application can perform certain sequences of actions (such as ordering and paying for a product in your web shop).
  • See (ask!) whether business logic hidden behind the user interface can be accessed, and therefore tested, at a lower (API or unit) level, thereby increasing both stability and speed of execution.
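As a sketch of what that lower-level, data-driven approach could look like: the rules, age limits and state overrides below are all invented for illustration (the client’s real logic lives behind their API and would be exercised with HTTP calls, for example using REST Assured), but the shape of the test is the point — one small decision function, one table of combinations:

```python
# Hypothetical stand-in for the eligibility logic the API exposes; modeled
# locally here to keep the sketch self-contained and runnable.
MINIMUM_AGE = {
    "accessory": 0,     # assumed: suitable for all ages
    "e-liquid": 18,     # assumed: 18 and over
    "e-cigarette": 21,  # assumed: 21 and over
}

def can_purchase(product_category: str, customer_age: int, state: str) -> bool:
    """Decide whether a customer may buy a product (illustrative rules only)."""
    required = MINIMUM_AGE.get(product_category)
    if required is None:
        return False
    # Assumed state-specific override: some states raise the e-liquid limit to 21.
    if product_category == "e-liquid" and state in {"CA", "HI"}:
        required = 21
    return customer_age >= required

# A handful of the hundreds of combinations, expressed as a data-driven table
# of (category, age, state, expected outcome) rows.
cases = [
    ("accessory", 16, "TX", True),
    ("e-liquid", 19, "TX", True),
    ("e-liquid", 19, "CA", False),   # assumed stricter state rule
    ("e-cigarette", 20, "NY", False),
    ("e-cigarette", 25, "NY", True),
]

for category, age, state, expected in cases:
    assert can_purchase(category, age, state) == expected
print("all eligibility checks passed")
```

Each row runs in milliseconds instead of the minutes a browser-driven order flow takes, and adding a combination means adding one line to the table rather than scripting another multi-form journey.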

For those of you familiar with the test automation pyramid, this might sound an awful lot like a stock example of that model being applied. And it is. However, in light of a couple of recent blog posts I read (this one from John Ferguson Smart being a prime example), I think it might not be such a good idea to relate everything to this pyramid anymore. Instead, I agree that what it comes down to (as John says) is to get clear WHAT it is that you’re trying to test and then write tests at the right level. If that leads to an ice cream cone, so be it. If only because I like ice cream…

This slightly off-topic remark about the test automation pyramid notwithstanding, I think the above case illustrates the key point I’m trying to get across fairly well. As I’ve said before, creating the most effective automated tests comes down to:

  • First, determining why you want to automate those tests in the first place. Although that’s not really the subject of this blog post, it IS the first question that should be asked. In the example in this post, the why is simple: the risk and impact of fines imposed for selling items to people who should not be allowed to buy them are high enough to warrant thorough testing.
  • Then, deciding what to test. In this case, it’s the business logic that determines whether or not a customer is allowed to purchase a given item, based on state of residence, product ID and date of birth.
  • Finally, we get to the topic of this blog post, the question of how to test a specific application or component. In this case, the business logic that’s the subject of our tests is exposed at the API level, so it makes sense to write tests at that level too. I for one don’t feel like writing scenarios for hundreds of UI-level tests, let alone running, monitoring and maintaining them…

I’m sure there are a lot of situations in your own daily work where reconsidering the approach taken in your automated tests might prove to be beneficial. It doesn’t have to be a shift from UI to API (although that’s the situation I most often encounter), it could also be writing unit tests instead of end-to-end user interface-driven tests. Or maybe in some cases replacing a large number of irrelevant unit tests with a smaller number of more powerful API-level integration tests. Again, as John explained in his LinkedIn post, you’re not required to end up with an actual pyramid, as long as you end up with what’s right for your situation. That could be a pyramid. But it could also not be a pyramid. Choose (and automate) wisely.

Why I don’t call myself a tester, or: defining what I do

Warning: rambling ahead!

Recently, I’ve been forced to think about roles in my profession, and what role it is exactly that I fulfill when at work. ‘Forced’ sounds way more negative than it is, by the way. In fact, I think it has been a good thing, because it has helped me think about and better define what it is I actually do. So, thanks to Richard and to Maaret (whose tweet was quoting this blog post, by the way) for making me think about the restrictive activity of pinning roles and labels on people. Nobody conforms 100% to the definition of a given role, should such a definition ever be created and agreed upon by everybody (good luck with that!). On the other hand, being able to explain what I do and what value I can add to a person, a team or an organization is something I can’t really do without, especially as a freelance contractor. And that, sadly, involves roles and labels, because that’s mainly how the keyword-driven IT freelancing and contracting business works.

So far, I’ve mainly defined what I do by referring to what I’m not, something you can see in the tweets I referred to above:

  • I am not a developer, at least not compared to what most people think of when they think of a developer. For someone involved in test automation (which is a form of software development), I write shockingly little code. I can’t even remember exactly when I last wrote code that was used in actual automated test execution. Somewhere at the beginning of last year, I think…
  • I am not a tester. The last time I evaluated and experimented with an application in order to assess its quality and usefulness has been years ago. I think it involved TMap test design techniques and rather large Excel sheets…
  • I don’t fit into an Agile team. This is my most recent realization. A lot of projects that come my way ask me to be a member of an Agile team, contributing to testing, test automation and possibly some other tasks. That just doesn’t fit with what I like to do or how I like to work. For starters, I think it’s kind of hard to be a member of an Agile team when you’re only there two days a week; you just miss too much. But I like to work on multiple projects at the same time and also have some time left for training, business development and other fun stuff. Unfortunately, the freelance project market here largely consists of projects for 32-40 hours a week, with Agile being a very popular way of working.

On the other hand:

  • I work a lot with developers, helping them to realize WHY they need to spend time creating automated tests and WHAT approach might work best. I gladly leave the HOW to them, since they’ll run circles around me with their development skills. This requires me to stay sort of up to date with tools, approaches and practices in software development, even though I don’t write code (often).
  • I work a lot with testers, helping them to think properly about what test automation can do for them and how to ‘sell’ it to other stakeholders. This requires me to stay up to date with how testers think and work, even though I don’t test myself.
  • I work a lot with Agile teams, helping them to create and especially test software more efficiently through smart application of tools. This requires me to stay up to date with team dynamics, Agile practices and trends, even though I haven’t been contributing to daily standups and sprint planning sessions for a while.

But I still have a hard time defining exactly what it IS that I do! My LinkedIn tagline says this:

Test automation | Service virtualization | Consultant | Trainer | Writer | Speaker

Those probably come closest to what I do. Most importantly, it states in what fields I am active (test automation and service virtualization). What other people call consulting makes up most of my time spent working at the moment (I’d like that to change a little, but that’s a different story). But I don’t like the word ‘consultant’ or ‘consulting’. It makes me think too much of people with big mouths, expensive suits and six months of actual experience. And I don’t wear expensive suits. Or suits in general.

The rest of my time is spent training a little (hopefully more in the near future), writing a little (same goes for this) and speaking a little (and for this too). Yet, I don’t consider myself a trainer, a writer or a speaker. But maybe I am all of them. As well as a developer. And a tester. I don’t know.

Long story short, what I think it boils down to is that I help people, teams and organizations perform better in the areas of test automation and service virtualization. These areas are of course not goals in their own right, but rather in service of the larger goals of more effective testing, software development and doing business in general.

I think the key word here is ‘help’. I like to help. It isn’t about me. It shouldn’t be. If it looks like it is about me, please say so, because I’d like to avoid that by all means possible. As a start, next week I’ll talk about test automation or service virtualization again.

Oh, and of course happy 2017 to all of you! I’m very much looking forward to helping people even more this year.