Are your test automation efforts holding you back?

In case you've either been living under a rock when it comes to testing trends (I'm not saying you have), you haven't paid any attention to what I've written in the past (this is much more likely), or possibly both: it seems that everybody in the testing world more or less agrees that automated testing can be highly beneficial, or even essential, to software development project success, especially with ever-shortening release cycles and less and less time per cycle available for testing.

But why do so many hours spent on creating automated tests amount to nothing more than a waste of time? In this post, I would like to sum up a couple of reasons that cause automated test efforts to go bad. These are reasons I've seen in practice and have had to work with and improve upon. And yes, I have certainly been guilty of some of the things I'll mention in this post myself. That's how I learned – the hard way.

Your flaky tests are sending mixed signals
One of the most prominent causes of time wasted on test automation is what I like to call 'flaky test syndrome'. These are tests that pass some of the time and fail at other times. As a result, your test results cannot be trusted, which effectively renders your tests useless. There is a multitude of possible reasons for tests becoming flaky, the most important being timing or synchronization issues. For example: a result has not been stored in the database yet when a test step tries to check it, or an object is not yet present on a web page when your test tries to access it. There are a lot of possible solutions to these problems, and I won't go into detail on every one of them, but what is paramount to test automation success is that your test results are 100% trustworthy. That is, if a test fails, it should be due to a defect in your AUT (or possibly to an unforeseen test environment issue), not due to you using Thread.sleep() throughout your tests.
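To illustrate, here is a minimal sketch of replacing a hard-coded wait with an explicit, condition-based wait in a Selenium WebDriver test. The By.id("result") locator and the timeout values are made up for the example, and the WebDriverWait constructor shown here is the one that takes a timeout in seconds:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {

    // Flaky: blindly assumes the result is always rendered within 5 seconds
    public void checkResultWithSleep(WebDriver driver) throws InterruptedException {
        Thread.sleep(5000);
        driver.findElement(By.id("result")).click();
    }

    // More robust: waits until the element is actually visible, up to 10 seconds
    public void checkResultWithExplicitWait(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement result = wait.until(
            ExpectedConditions.visibilityOfElementLocated(By.id("result")));
        result.click();
    }
}

The explicit wait polls until the condition is met and only throws a TimeoutException when the timeout is actually exceeded, so the test waits exactly as long as necessary and no longer.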


Your test results take too long to analyze
There is no use in having an automated test suite that runs smoothly if your test results can't be interpreted at a glance. If you can't see within five seconds whether a test run was successful, you might want to update your reporting strategy. Also, if you can't determine at least the general direction of the source of an error from your reporting within a couple of seconds, you might want to update your reporting strategy. And if your default response to a failure appearing in your test reports is to rerun the test and start debugging or analyzing from there, you might want to update your reporting strategy.
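One small thing that already helps is making sure a failure captures enough context to point you in a direction. As a minimal sketch (the reports directory and the naming convention are made up for the example), a screenshot taken at the moment of failure often tells you more than a bare stack trace:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class FailureReporting {

    // Call this from your test framework's failure hook (e.g., a listener or rule)
    public void saveScreenshotOnFailure(WebDriver driver, String testName) throws Exception {
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        // Assumes a 'reports' directory already exists next to the test run
        Files.copy(screenshot.toPath(),
            Paths.get("reports", testName + ".png"),
            StandardCopyOption.REPLACE_EXISTING);
    }
}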

You put too much trust in your automated tests and test results
Even though I just stated that you should do everything within your power to make your automated tests as trustworthy as possible, this by no means implies that you should rely on your automated tests alone when it comes to product quality. Automated tests (or better, checks) can only do so much in verifying whether your application meets all customer demands. So, for all of you having to deal with managers telling you to automate all of the tests: please, please do say NO.


You test everything through the user interface
I don’t think I have to tell you this again, especially not those of you who are regular readers of my blog, but I’m fairly skeptical when it comes to GUI-driven test automation. If anything, it makes your test runs take longer, and it will probably result in more maintenance effort whenever the UI is redesigned. Setting up a proper UI-based automation framework also tends to take longer, so unless you’re really testing the user interface (which is not that often the case), please try to avoid using the UI to execute your tests as much as possible. It might take a little longer to figure out how to perform a test below the UI, but it will be much more efficient in the long run.
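As a sketch of what testing below the UI can look like: instead of driving a browser through a multi-step flow to verify that an order exists, you could exercise the API directly. The endpoint and expected response below are hypothetical, and the example uses the HttpClient that ships with Java 11 and up:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiLevelCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint; in a real project this would be your AUT's API
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/api/orders/12345"))
            .GET()
            .build();

        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // A check at this level is fast and immune to UI redesigns
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected HTTP 200 but got " + response.statusCode());
        }
    }
}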

Hopefully these points will prompt you to take a critical look at what you’re doing at any point in your test automation efforts. What might also help is to ask yourself what I think is the most important question when it comes to test automation (and to work in general, by the way):

“Am I being effective, or am I just being busy?”

Let’s make test automation better.

On designing a self-diagnosing and self-healing automated test framework – part 1

This is the first post in a short series on designing robust, self-diagnosing and, if possible, even self-healing tests. You can read all articles in this series by clicking here.

I like automating tests. So much so that I have made it my profession. What I don’t like, though, is wasting a lot of time trying to find out why an automated test failed and whether the failure was caused by a genuine bug, a flaky test or some sort of test environment issue.

A quick Google search showed me that there are a couple of commercial test tools that claim to come with some sort of self-diagnosing and even a self-healing mechanism, for example this one. I’m very interested to see how such a mechanism works. Skeptical as I am, I don’t really believe it until I’ve seen it. Nonetheless, I think it would be interesting to see what can be done to make existing automated test frameworks more robust by improving their ability to self-diagnose and possibly even self-heal in case an unexpected error occurs. So that’s what I’ll try to do in the next couple of posts.

The step-by-step approach I want to follow and the considerations to be made along the way are (very) loosely inspired by a scientific article titled ‘A Self-Healing Approach for Object-Oriented Applications’, which can be downloaded from here. This article presents an approach and architecture for fault diagnosis and self-healing of interpreted object-oriented applications. As the development and maintenance of automated tests should be treated like any other software development project, I see no reason why the principles and suggestions presented in the article could not apply to a test automation framework, at least to some extent…

When is a system self-diagnosing and self-healing?
Let’s start with the end in mind, as Stephen Covey so eloquently put it. What are we aiming for? In order for a system to be self-diagnosing, it should:

  1. Be self-aware, or more concretely, always have a deterministic state
  2. Recognize the fact that an error has occurred
  3. Have enough knowledge to stabilize itself
  4. Be able to analyze the problem situation
  5. Make a plan to heal itself
  6. Suggest healing solutions to the system administrator (in this case, the person responsible for test automation)

If we want our system not only to be self-diagnosing but also self-healing, the system should also:

  1. Heal itself without human intervention

In this post – and probably some future posts as well – I will try and see whether it is possible to design a generic approach for creating robust, self-diagnosing and self-healing test automation frameworks. I’ll try and include meaningful examples based on a relatively simple Selenium test framework wherever possible.

Self-aware tests
The most straightforward way to make any piece of software self-aware is to introduce the concept of state. A robust program, and therefore an automated test as well, should always be in a specific state when it is executing. For robustness, we will assume that the state model for an automated test is deterministic, i.e., a test can never be in more than one state, and an event that triggers a state transition should always result in the test ending up in a single new state. Let’s say we identify the following states that an automated test can be in:

  • Not running
  • Initialization
  • Running
  • Error
  • Teardown

The state model or state transition diagram could then look like this:

A first state model for our automated test

A sample implementation of this state model (also known as a finite state machine or FSM) can be created using a Java enum:

public enum State {

    // Each enum constant implements its own transition logic;
    // together they form the finite state machine.
    NOT_RUNNING {
        @Override
        State doTransition(String input) {
            System.out.println("Going from State.NOT_RUNNING to State.INITIALIZATION");
            return INITIALIZATION;
        }
    },
    INITIALIZATION {
        @Override
        State doTransition(String input) {
            if (input.equals("error")) {
                System.out.println("Going from State.INITIALIZATION to State.ERROR");
                return ERROR;
            } else {
                System.out.println("Going from State.INITIALIZATION to State.RUNNING");
                return RUNNING;
            }
        }
    },
    // RUNNING and TEARDOWN follow the same pattern as INITIALIZATION
    RUNNING {
        @Override
        State doTransition(String input) {
            if (input.equals("error")) {
                System.out.println("Going from State.RUNNING to State.ERROR");
                return ERROR;
            } else {
                System.out.println("Going from State.RUNNING to State.TEARDOWN");
                return TEARDOWN;
            }
        }
    },
    TEARDOWN {
        @Override
        State doTransition(String input) {
            if (input.equals("error")) {
                System.out.println("Going from State.TEARDOWN to State.ERROR");
                return ERROR;
            } else {
                System.out.println("Going from State.TEARDOWN to State.NOT_RUNNING");
                return NOT_RUNNING;
            }
        }
    },
    ERROR {
        @Override
        State doTransition(String input) {
            if (input.equals("ok")) {
                System.out.println("Going from State.ERROR to State.NOT_RUNNING");
                return NOT_RUNNING;
            } else {
                System.out.println("Remaining in State.ERROR");
                return this;
            }
        }
    };

    abstract State doTransition(String input);
}
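To give an idea of how the transitions play out, here is a minimal sketch that drives the state machine through a simulated test run that hits an error during execution and then recovers (the 'ok' and 'error' strings are simply the transition triggers used in the enum above):

public class StateMachineDemo {

    public static void main(String[] args) {
        State state = State.NOT_RUNNING;

        state = state.doTransition("ok");     // NOT_RUNNING -> INITIALIZATION
        state = state.doTransition("ok");     // INITIALIZATION -> RUNNING
        state = state.doTransition("error");  // RUNNING -> ERROR
        state = state.doTransition("ok");     // ERROR -> NOT_RUNNING (recovered)
    }
}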

In the next post, I will show how we can apply this state model to a simple Selenium WebDriver test to make it more robust. I will also demonstrate how this state model helps us let our tests fail gracefully and determine what exactly constitutes an error (versus a failed check, for example).

Book: Alan Page – The A word

When I’m not working on testing and test automation myself, I like to read books, articles and blog posts related to my profession. One book I recently finished reading is The A Word by Alan Page. Although it’s more a collection of revised posts from his blog The Angry Weasel than a book proper, it’s been a very interesting read nonetheless.

For those of you who aren’t familiar with his writing, Alan writes a lot about his experiences with testing and test automation. He is pretty well known for his skepticism towards the idea that automated testing is the solution to everything, especially when it comes to GUI-based test automation. Although I write regularly about test automation using Selenium, I am not a big advocate of using UI-based test automation tools for everything test automation related myself (see for example this post). Therefore a lot of what is covered in The A Word resonated with me, and I found it a very pleasant read, although it is rather short at around 70-80 pages.


I wholeheartedly recommend this book to anybody who has anything to do with test automation, as Alan offers valuable food for thought for test engineers, test managers and everyone else doing or relying on automated tests. It might just make you think about whether you’re doing the right stuff, and doing your stuff right.

The A Word is available on LeanPub. All profits go to the American Cancer Society, so that alone should be a reason to pick it up and leave a donation.