Review: Test Automation Day 2016

I like conferences. For me, attending a conference is a great way of breaking out of the daily habit of doing work for clients, meeting new people, getting inspired and bringing home ideas to apply in my job. Therefore, I try to attend at least two conferences every year. I’m also slowly getting into public speaking, answering the call for papers for conferences that seem like a good fit for a story I’d like to tell.

Last Thursday (June 23rd) I went to Rotterdam for the Test Automation Day (TAD) for what I believe was the fifth time. It’s one of the major conferences on test automation in the Netherlands, and therefore almost a must-visit. If not for the quality of the keynotes and other sessions, then at least to meet old and new friends, colleagues and others working in the test automation field. I responded to the call for papers this year, but unfortunately my proposal was not selected, so I attended the conference as a visitor. I’ll just have to try again next year (more about that at the end…).

I missed the bigger part of the opening keynote by Sally Goble (from The Guardian) due to heavy traffic, so I can’t really comment on that. The next keynote was by Mark Fewster, who talked about test automation health. By this, he means that you shouldn’t just focus on adding more and more automated scripts to your test automation solution, but also keep a close eye on the quality and performance of the solution itself. This helps prevent unnecessary maintenance effort, flaky tests and ultimately the demise of your solution altogether. A very valuable insight, and one that is easily overlooked.

One of the slides from Mark Fewster’s talk on test automation health

After that, it was time for the first break and a chance to talk to some of the other attendees. I ran into Richard Bradshaw, also (or maybe even better) known as the Friendly Tester. We’d exchanged some tweets in the past and I’m a reader of his blog, but I had never met him in person. He was there to deliver a talk later that day, and we had a short chat over coffee. It’s always nice to meet people you’ve only known from the web in person.

Next up was the first round of breakout sessions. I opted to go to what I thought was a workshop on agile test automation, hosted by the chairwoman of this year’s TAD, Linda Hayes. I selected this session because I much prefer attending workshops over presentations that merely broadcast information. Unfortunately, it turned out I had misread the programme, as this was a masterclass rather than a workshop, so there were no interactive moments during the session. I was also somewhat disappointed by the contents of the presentation itself, as I felt that the concepts Linda explained weren’t new, at least not to me. Still, I picked up a couple of small nuggets of wisdom from this session as well, some of which I might be able to apply in my current project, so it wasn’t all bad.

After that: lunchtime! I spent most of the lunch break talking to people I hadn’t seen in a while (in some cases more than five years), so it flew by. After lunch, another keynote was delivered by Marielle Stoelinga from Twente University (which I graduated from as well, coincidentally). She talked about applying model-based testing to probabilistic software, something quite different from my daily practice of applying tools to test web applications and distributed backend systems. Since I still have a bit of interest in the mathematical background of computer science in general, and software testing in particular, left over from my own student days, this was an interesting subject to me.

Then, another round of breakout sessions. I first went to listen to Richard Bradshaw, who delivered one of the better talks of the day in my opinion. He’s a great presenter and managed to make his session on expanding the view on applying tools in testing (he talks about ‘automation in testing’, not about ‘test automation’) interactive as well. Very refreshing and something more people should do!

The second breakout session I attended was a short workshop on service virtualization (the other field I’m particularly interested in, next to test automation) using a web-based tool called StubUI. This workshop was delivered by Maarten Metselaar and Derk-Jan de Grood and introduced the concept and benefits of service virtualization. There was far too little time to go into any real depth, but I hadn’t worked with StubUI before, so I learned something new in this session as well.

The closing keynote for the day was delivered by Alexandre Petrenko. He talked about dealing with faults and uncertainty in software in a systematic way. This talk wasn’t one of the highlights of the day unfortunately. The concept – including an especially good bit about applying mutation testing to software behaviour models – was very interesting (although fairly academic), but the delivery of the talk wasn’t really on par with what I think a good closing keynote should be. It was all a bit dry, to be honest.

A slide from Alexandre Petrenko’s talk on mutation testing applied to behaviour models

Of course, when things get a little dry, there’s only one solution: after-conference drinks! After all the talks had finished and the conference was essentially over, I spent some more time talking to people, ice-cold drink in hand (it was HOT that day). Looking back on the day, I wasn’t all that impressed by the quality of most talks, but meeting and talking to other attendees and some of the sponsors still made it a day well spent. As for the quality of the talks, the only real remedy is to put my money where my mouth is, so I’ll definitely try to get in as a speaker again next year!

Selenium and what it is not

Please note: this post is in no way meant as a rant towards some of my fellow testers. I’ve only written this to try and clear up some of the misconceptions with regards to Selenium WebDriver that I see popping up on various message boards and social media channels.

Selenium is hot in the test automation world. This probably isn’t news to you. If it is, you’re welcome, please remember you read it here first! In all seriousness, Selenium has been widely used for the past couple of years, and I don’t see its popularity fading any time soon. Due to its popularity, Selenium has been the go-to solution for everybody looking to write automated checks for browser-based applications, even though sometimes it might not be the actual best answer to the question at hand. In the past couple of years I have seen a lot of funny, strange and downright stupid questions being asked with regards to Selenium.

In this post, I would like to look at Selenium from a slightly different perspective by answering the question ‘what is Selenium NOT?’. All of the examples in the remainder of this blog post are based on actual questions or anecdotes I have read. I’m not going to link to them, since this post is not about blaming and shaming, it’s about educating those in need of some education.

Selenium is not a test tool
You read that right, Selenium is not a test tool. A lot of people (including a staggeringly large percentage of recruiters) get this wrong, unfortunately. To quote the Selenium web site:

Selenium automates browsers. That’s it! What you do with that power is entirely up to you.

My personal number one reason why Selenium is not a test tool: it does not provide a mechanism to perform assertions. For that, you’ll have to combine Selenium with JUnit or TestNG (for Java), NUnit (for C#) or any other actual (unit) testing framework. Selenium only performs predefined actions on a web site or other application running in a browser. It does this fairly well, but that doesn’t make it a test tool.
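To illustrate that division of labour, here is a minimal sketch (the URL and expected title are just placeholders, not taken from the original post): Selenium drives the browser, while the actual assertion comes from the testing framework, in this case TestNG.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class PageTitleTest {

    @Test
    public void pageTitleShouldMatch() {
        WebDriver driver = new ChromeDriver();
        try {
            // Selenium only performs the browser actions...
            driver.get("https://example.com");
            String title = driver.getTitle();

            // ...the actual check is provided by TestNG, not by Selenium
            Assert.assertEquals(title, "Example Domain");
        } finally {
            driver.quit();
        }
    }
}
```

Take away the `Assert` line and Selenium will happily open the page and read the title without ever telling you whether anything was right or wrong.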

No, Selenium is NOT a test tool

Off-topic: the same applies to tools that support Behaviour Driven Development, such as Cucumber and SpecFlow. Those aren’t test tools either. They can be used as an assistant in writing automated tests (checks), but that isn’t the same thing.

Selenium is not a tool to be used in API testing
I’ve seen this one come by a number of times, even as recently as a few days ago:

How can I perform tests on APIs using Selenium?

Again, Selenium automates browsers, so it operates on the user interface level of an application. Actions performed on a user interface might invoke API calls. Selenium is completely oblivious to this API interaction, however. There’s no way to have Selenium interact with APIs directly. If you want to perform automated checks on the API level, please use a tool that is specifically created for these types of checks, such as REST Assured. It might be wise to repeatedly ask yourself ‘Am I actually testing the user interface, or am I merely testing my application THROUGH that user interface?’.
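As a minimal illustration (the endpoint and field name below are made up for this sketch), a REST Assured check talks to the API directly, with no browser involved at all:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.testng.annotations.Test;

public class CustomerApiTest {

    @Test
    public void getCustomerShouldReturnExpectedFirstName() {
        // Hypothetical endpoint, for illustration only
        given().
        when().
            get("https://api.example.com/customers/1").
        then().
            statusCode(200).
            body("firstName", equalTo("John"));
    }
}
```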

Selenium is not a performance test tool
Seeing how easy it is to write tests using Selenium (I’m not saying it’s easy to write GOOD tests, though), a lot of people seem to think that these tests can easily be leveraged to execute load and performance tests as well, for example using Selenium Grid. This is a pretty bad idea, though. Selenium Grid is a means to execute your functional tests in parallel, thereby shortening the execution time. It is not meant to be (ab)used as a performance testing platform, if only for these two simple reasons:

  • It doesn’t scale very well, so you’ll probably have a hard time generating anything but trivial loads
  • Selenium does not offer a mechanism to measure response times (at least not without taking into account the time needed at the client side to render a page or specific elements), and Selenium Grid isn’t able to gather and aggregate these response times for each individual node and present them in a concise and useful manner.

Both of these are essential if you’re serious about your performance testing, so please use a tool that is specifically written for that purpose, such as JMeter.
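To make the distinction concrete, here is a minimal sketch of what Grid is actually for (Selenium 2/3-era API; the hub URL is a placeholder): you point a functional test at the hub, which forwards the session to a node with a matching browser, so a suite of such tests can run in parallel across nodes.

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample {

    public static void main(String[] args) throws Exception {
        // Placeholder hub address; in a real setup this points to your Grid hub
        URL hubUrl = new URL("http://localhost:4444/wd/hub");

        // The hub hands the session to a node that offers the requested browser
        WebDriver driver = new RemoteWebDriver(hubUrl, DesiredCapabilities.firefox());

        driver.get("https://example.com");
        System.out.println(driver.getTitle());

        driver.quit();
    }
}
```

Note that nothing in this setup measures or aggregates response times; that is exactly the part a dedicated performance testing tool provides.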

Selenium can’t handle your desktop applications
I feel that I’m starting to repeat myself here: Selenium automates browsers. Anything outside the scope of the browser cannot be handled by Selenium. This means that Selenium also can’t handle alerts and dialogs that are native to the operating system, such as the Windows file upload/download dialogs. To handle these, either use a tool such as AutoIt (if you’re on Windows) or, preferably, bypass the user interface altogether and use a solution such as the one presented in this blog post.
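For the specific case of file uploads, one common workaround (not the same solution as the one in the linked post, and the page and locators below are made up) is to send the file path straight to the file input element, so the native Windows dialog never opens in the first place:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class FileUploadExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/upload");

        // Instead of clicking 'Browse...' and dealing with the native dialog,
        // type the file path directly into the <input type="file"> element
        WebElement fileInput = driver.findElement(By.cssSelector("input[type='file']"));
        fileInput.sendKeys("C:\\path\\to\\file.txt");

        driver.findElement(By.id("submit")).click();
        driver.quit();
    }
}
```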

I sincerely hope this clears up some of the misconceptions around Selenium. If you have any other examples, feel free to post a comment or send me an email.

Open sourcing my workshop: an experiment

This is an experiment I’ve been thinking about for some time now. I have absolutely no idea how it will turn out, but hey, there’s only one way to find out!

What is an open source workshop?
I have recently created a workshop and delivered it with reasonable success to an audience of Dutch testers. I have processed the feedback I received that day, and now it’s time for the next step: I’m making it open source. This means that I’m giving everybody full access to the slides, the exercises and their solutions and the notes I have used in the workshop. I’m hoping this leads to:

  • Feedback on how to further improve the contents and the presentation of the workshop
  • More people delivering this workshop and spreading the knowledge it contains

What is the workshop about and who is it for?
The original version of this workshop is an introduction to RESTful web services and how to write tests for these web services using REST Assured. It is intended for testers with some basic Java programming skills and an interest in web service and API testing. However, as you start to modify the workshop to your personal preferences, both the contents and the target audience may of course change.

A simplified outline of the workshop looks like this:

  1. An introduction to RESTful web services and their use in modern IT applications
  2. An introduction to REST Assured as a tool to write tests for these services
  3. Setup of REST Assured and where to find documentation
  4. Introduction of the application under test
  5. Basic REST Assured features, Hamcrest and GPath
  6. Parameterization of tests
  7. Writing tests for secure web services
  8. Reuse: sharing variables between tests

The workshop comes with a couple of small sets of exercises and their solutions (the workshop was originally designed to be delivered in three hours).
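To give you an idea of the style of the exercises (this snippet is not taken from the workshop itself; the endpoint and response structure are made up), a basic REST Assured check combining a query parameter, a GPath expression and a Hamcrest matcher could look like this:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.hasItem;

import org.testng.annotations.Test;

public class WorkshopStyleExampleTest {

    @Test
    public void circuitListShouldContainMonza() {
        given().
            queryParam("season", "1995").             // parameterize the request
        when().
            get("https://api.example.com/circuits").  // hypothetical endpoint
        then().
            statusCode(200).
            // 'circuits.name' is a GPath expression over the JSON response,
            // hasItem() is the Hamcrest matcher doing the actual check
            body("circuits.name", hasItem("Monza"));
    }
}
```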

Prerequisites and installation instructions
As stated before, workshop participants should have an interest in test automation, some basic knowledge of Java (or any other object-oriented programming language) and a grasp of how web services work.

The exercises are contained within a Maven project. As REST Assured is a Java library, in order to get things working you need a recent JDK and an IDE with support for Maven. Since TestNG is used as a test runner in the examples, your IDE should support this as well (see the TestNG web site for instructions), but feel free to rewrite the exercises to use your test framework of choice. I just happen to like TestNG.

So where can I find all these goodies?
That’s an easy one: on my GitHub repo.

How can I contribute?
I hope that by now you’re at least curious and maybe even enthusiastic about trying out this workshop and maybe even thinking of contributing to it. You can do so in various ways:

  • By giving me constructive feedback on either the concept of an open source workshop, the contents of this particular workshop, or both.
  • By spreading the word. I’d love for you to let your friends, colleagues or anyone else in your network know that this workshop exists. The more feedback I get, the better this workshop becomes.
  • By actually delivering the workshop yourself and letting me know how it turned out. That’s a great form of feedback!

Interesting, but I have some questions…
Sure! Just submit a comment to this post or send me an email at bas@ontestautomation.com, especially if you’re planning to deliver the workshop but feel stuck in some way. Also, please don’t forget to share the raving reviews you got (or stories on why and where you got booed off the stage…)! Remember that this is an experiment for me too, and if it turns out to be a successful one, I will definitely create and publish more of these workshops in the future.

And on a final note, if you’re interested in having this workshop delivered at your organization, but you don’t feel comfortable doing it yourself, please let me know as well. I am happy to discuss the options available and work something out.