Are your test automation efforts holding you back?

In case you have either lived under a rock when it comes to testing trends (not that I’m saying you did), haven’t paid any attention to what I’ve written in the past (which is much more likely), or possibly both: it seems that everybody in the testing world more or less agrees that automated testing can be highly beneficial, or even essential, to software development project success, especially with ever-shortening release cycles and less and less time per cycle available for testing.

But why is it, then, that so many hours spent on creating automated tests amount to nothing more than a waste of time? In this post, I would like to sum up a couple of reasons that cause automated test efforts to go bad. These are reasons I’ve seen in practice and have had to work with and improve upon. And yes, I have certainly been guilty of some of the things I’ll mention in this post myself. That’s how I learned – the hard way.

Your flaky tests are sending mixed signals
One of the most prominent causes of time wasted on test automation is what I like to call ‘flaky test syndrome’: tests that pass some of the time and fail at other times, seemingly at random. As a result, your test results cannot be trusted, which effectively renders your tests useless. There is a multitude of possible reasons for tests becoming flaky, the most important being timing or synchronization issues. For example: a result has not yet been stored in the database when a test step tries to check it, or an object is not yet present on a web page when your test tries to access it. There are a lot of possible solutions to these problems, and I won’t go into detail on every one of them, but what is paramount to test automation success is that your test results are 100% trustworthy. That is, if a test fails, it should be due to a defect in your AUT (or possibly an unforeseen test environment issue), not due to you using Thread.sleep() throughout your tests.
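To illustrate that last point, here is a minimal sketch of the difference between a fixed sleep and an explicit wait in Selenium WebDriver (the submitButton element is a hypothetical example):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SynchronizationExample {

	// flaky: a fixed sleep is either too short (the test fails) or too long (time is wasted)
	public void clickAfterFixedSleep(WebDriver driver) throws InterruptedException {
		Thread.sleep(5000);
		driver.findElement(By.id("submitButton")).click();
	}

	// more reliable: wait until the element is actually clickable, up to a timeout
	public void clickWithExplicitWait(WebDriver driver) {
		WebDriverWait wait = new WebDriverWait(driver, 10);
		WebElement button = wait.until(ExpectedConditions.elementToBeClickable(By.id("submitButton")));
		button.click();
	}
}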


Your test results take too long to analyze
There is no use in having an automated test suite that runs smoothly if your test results can’t be interpreted at a glance. If you can’t see within five seconds whether a test run was successful, you might want to update your reporting strategy. Also, if you can’t determine at least the general direction to the source of an error from your reporting in a couple of seconds, you might want to update your reporting strategy. And if your default response to a failure appearing in your test reports is to rerun the test and start debugging or analyzing from there, you might want to update your reporting strategy.
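One small thing that helps here is making every check report exactly what went wrong and where. A minimal, hypothetical JUnit sketch (the values are placeholders) of an assertion whose message points directly at the source of the failure:

import org.junit.Assert;
import org.junit.Test;

public class ReportingExample {

	@Test
	public void loginResult_shouldShowWelcomeMessage() {
		// placeholder for a value retrieved from the application under test
		String actualMessage = "Error: account locked";

		// a descriptive assertion message means the report itself points to the
		// problem, instead of forcing you to rerun the test and start debugging
		Assert.assertEquals("Unexpected message shown after logging in as 'user1'",
				"Welcome, user1!", actualMessage);
	}
}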

You put too much trust in your automated tests and test results
Even though I just stated that you should do everything within your power to make your automated tests as trustworthy as possible, this by no means implies that you should rely on your automated tests alone when it comes to product quality. Automated tests (or better, checks) can only do so much in verifying whether your application meets all customer demands. So, for all of you having to deal with managers telling you to automate all of the tests: please, please do say NO.


You test everything through the user interface
I don’t think I have to say this again, especially not to those of you who are regular readers of my blog, but I’m fairly skeptical when it comes to GUI-driven test automation. If anything, it makes your test runs take longer, and it will probably result in more maintenance effort whenever the UI is redesigned. Setting up a proper UI-based automation framework also tends to take longer, so unless you’re really testing the user interface (which is not that often the case), please try to avoid using the UI to execute your tests as much as possible. It might take a little longer to figure out how to perform a test below the UI, but it will be much more efficient in the long run.

Hopefully these points will prompt you to take a critical look at what you’re doing at any point in your test automation efforts. What might also help is to ask yourself what I think is the most important question when it comes to test automation (and to work in general, by the way):

“Am I being effective, or am I just being busy?”

Let’s make test automation better.

API testing best practices

This is the second post in a three-part series on API testing. The first post, which can be found here, provided a brief introduction to APIs, API testing and its relevance to the testing world. This post features some best practices for everybody involved in API testing. The third and final post will contain some useful code examples for those of you looking to build your own automated API testing framework.

As was mentioned in the first post in this mini-series, API test execution differs from user interface-based testing, since APIs are designed for communication between systems or system components rather than between a system or system component and a human being. This introduces some challenges when testing APIs, which I will try to tackle here.

API communication
Whereas a lot of testing on the user interface level is still done by hand (and rightfully so), this is impossible for API testing: you need a tool to communicate with an API. There are a lot of tools available on the market. Some of the best-known tools that are specifically targeted at API testing are:

  • Parasoft SOAtest
  • SmartBear SoapUI

I have extensive experience with SOAtest and limited experience with SoapUI, and I can vouch for their usefulness in API testing.

Structuring tests
An API usually consists of several methods or operations that can be tested individually as well as through test scenarios. These test scenarios are usually constructed by stringing together multiple API calls. I suggest a three-step approach to testing any API:

  1. Perform syntax testing of individual methods or operations
  2. Perform functional testing of individual methods or operations
  3. Construct and execute test scenarios

Syntax testing
This type of testing is performed to check whether the method or operation accepts correct input and rejects incorrect input; a code sketch follows the list below. For example, syntax testing determines whether:

  • Leaving mandatory fields empty results in an error
  • Optional fields are accepted as expected
  • Filling fields with incorrect data types (for example, putting a text value into an integer field) results in an error
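As an illustration, a couple of these checks could look like the sketch below. I’m using REST Assured purely as an example tool here; the endpoint, fields and expected status codes are all hypothetical:

import static io.restassured.RestAssured.given;

import org.junit.Test;

public class CustomerSyntaxTest {

	// hypothetical API: POST /customers, with 'name' as a mandatory field

	@Test
	public void createCustomer_withoutMandatoryName_shouldBeRejected() {
		given().
			contentType("application/json").
			body("{\"email\": \"test@example.com\"}"). // mandatory 'name' field left out
		when().
			post("http://api.example.com/customers").
		then().
			statusCode(400); // invalid input should result in an error
	}

	@Test
	public void createCustomer_withTextInIntegerField_shouldBeRejected() {
		given().
			contentType("application/json").
			body("{\"name\": \"John Smith\", \"age\": \"not a number\"}"). // wrong data type
		when().
			post("http://api.example.com/customers").
		then().
			statusCode(400);
	}
}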

Functional testing of individual operations or methods
This type of testing is performed to check whether the method or operation performs its intended action correctly; again, a code sketch follows the list. For example:

  • Is calculation X performed correctly when calling operation / method Y with parameters A, B and C?
  • Is data stored correctly for future use when calling a setter method?
  • Does calling a getter method retrieve the correct information?
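A sketch of the first item in code, again using REST Assured as an example and a made-up calculation operation:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class CalculationTest {

	// hypothetical operation: GET /add?a=2&b=3 returns the sum in a 'result' field
	@Test
	public void add_withTwoIntegers_shouldReturnTheCorrectSum() {
		given().
			queryParam("a", 2).
			queryParam("b", 3).
		when().
			get("http://api.example.com/add").
		then().
			statusCode(200).
			body("result", equalTo(5)); // check the actual outcome, not just the status code
	}
}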

Test scenarios
Finally, when individual methods or operations have been tested successfully, method calls can be strung together to emulate business processes. For example:
[Figure: API test scenarios]
You can see that this approach is not unlike user interface-based testing, where you first test individual components for correct behaviour before executing end-to-end test scenarios.
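In code, such a scenario could look like the sketch below, where the output of one call feeds the next. I’m again using REST Assured as an example, with a hypothetical /customers resource, and assuming the API returns the generated id as a string:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class CustomerScenarioTest {

	// scenario: create a customer, then retrieve it to verify it was stored correctly
	@Test
	public void createdCustomer_shouldBeRetrievable() {

		// step 1: create a new customer and capture the generated id from the response
		String customerId =
			given().
				contentType("application/json").
				body("{\"name\": \"John Smith\"}").
			when().
				post("http://api.example.com/customers").
			then().
				statusCode(201).
			extract().
				path("id");

		// step 2: use the id from step 1 in the next call in the scenario
		given().
		when().
			get("http://api.example.com/customers/" + customerId).
		then().
			statusCode(200).
			body("name", equalTo("John Smith"));
	}
}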

API virtualization
When testing systems of interconnected components, the availability of some of the components required for testing might be limited at the time of testing (or they might not be available at all). Reasons for limited availability of a component might be:

  • The component itself is not yet developed
  • The component features insufficient or otherwise unusable test data
  • The component is shared with other teams and therefore cannot be freely used

In any of these cases, virtualization of the API can be a valuable solution, enabling testing to continue as planned. Several levels of API virtualization exist:

  • Mocking – This is normally done for code objects, using a framework such as Mockito (a sketch follows this list)
  • Stubbing – This is used to create a simple emulation of an API, mostly used for SOAP and REST web services
  • Virtualization – This is the most advanced technique of the three, enabling the simulation of the behaviour of complex components, including back-end database connectivity and transport protocols other than HTTP
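As a minimal sketch of the first level, here is how an unavailable component could be mocked using Mockito. The CustomerApi interface and its behaviour are hypothetical:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Assert;
import org.junit.Test;

public class OrderConfirmationTest {

	// hypothetical interface for a component that is not yet available for testing
	interface CustomerApi {
		String getCustomerName(int customerId);
	}

	@Test
	public void orderConfirmation_shouldContainCustomerName() {
		// create a mock of the unavailable component and define its behaviour
		CustomerApi customerApi = mock(CustomerApi.class);
		when(customerApi.getCustomerName(42)).thenReturn("John Smith");

		// the code under test can now be exercised without the real component
		String confirmation = "Order confirmed for " + customerApi.getCustomerName(42);
		Assert.assertEquals("Order confirmed for John Smith", confirmation);
	}
}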

Non-functional testing
As with all software components, APIs can (and should!) be tested for characteristics other than functionality. Some of the most important non-functional API test types that should at least be considered are:

  • Security testing – Is the API accessible to those who are allowed to use it and inaccessible to those without the correct permissions?
  • Performance – Especially for web services: are the response times acceptable, even under high load?
  • Interoperability and connectivity – Can the API be consumed in the agreed manner, and does it connect to other components as expected?

Most of the high-end API testing tools offer solutions for executing these (and many other) non-functional test types.
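As an example of the first item in the list above, a basic security check could look like the sketch below, assuming token-based authentication on a hypothetical endpoint:

import static io.restassured.RestAssured.given;

import org.junit.Test;

public class AccountSecurityTest {

	// hypothetical secured endpoint: GET /accounts/{id} requires a valid bearer token

	@Test
	public void accountDetails_withoutCredentials_shouldBeInaccessible() {
		given().
		when().
			get("http://api.example.com/accounts/12345").
		then().
			statusCode(401); // no credentials supplied, so access should be denied
	}

	@Test
	public void accountDetails_withValidToken_shouldBeAccessible() {
		given().
			header("Authorization", "Bearer some-valid-token"). // placeholder token
		when().
			get("http://api.example.com/accounts/12345").
		then().
			statusCode(200);
	}
}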

More useful API testing best practices can again be found in the API Testing Dojo.

Do you have any additional API testing best practices you would like to share with the world?

Using the Page Object Design pattern in Selenium WebDriver

In a previous post, we saw how using an object map significantly reduces the amount of maintenance needed on your Selenium scripts when your application under test is updated. Using an object map minimizes duplication of code at the object level. In this post, I will introduce an additional optimization pattern that minimizes the code maintenance required at a higher level of abstraction.

Even though we have successfully stored our object properties in a SPOM (Single Point Of Maintenance), we still have to write code to handle those objects every time a script in our test set processes the page containing them. If our set of test scripts requires processing a login form five times throughout its execution, we need to include the code that handles the objects required to log in – a username field, a password field and a submit button, for example – five times as well. And if the login page changes – for example, an extra checkbox is added to have the user agree to certain terms and conditions – we need to update our scripts in five places to include the handling of that checkbox.

To eliminate this code redundancy and maintenance burden, we are going to use a different approach known as the Page Object design pattern. This pattern uses page objects that represent a web page (or a form within a page, if applicable) to separate test code (validations and test flow logic, for example) from page specific code. It does so by making all actions that can be performed on a page available as methods of the page object representing that page.

So, assuming our test script needs to log in twice (with different credentials), instead of this code:

public static void main(String args[]) {
	
	// start testing (objMap refers to the object map from the previous post)
	WebDriver driver = new HtmlUnitDriver();
		
	// first login
	driver.get("http://ourloginpage");
	driver.findElement(objMap.getLocator("loginUsername")).sendKeys("user1");
	driver.findElement(objMap.getLocator("loginPassword")).sendKeys("pass1");
	driver.findElement(objMap.getLocator("loginSubmitbutton")).click();
		
	// do stuff
		
	// second login
	driver.get("http://ourloginpage");
	driver.findElement(objMap.getLocator("loginUsername")).sendKeys("user2");
	driver.findElement(objMap.getLocator("loginPassword")).sendKeys("pass2");
	driver.findElement(objMap.getLocator("loginSubmitbutton")).click();
		
	// do more stuff
	
	// stop testing
	driver.close();
}

we would get

public static void main(String args[]) {
		
	// start testing
	WebDriver driver = new HtmlUnitDriver();
		
	// first login
	LoginPage lp = new LoginPage(driver);
	HomePage hp = lp.login("user1","pass1");
		
	// do stuff
		
	// second login
	lp = new LoginPage(driver);
	hp = lp.login("user2","pass2");
		
	// do more stuff
		
	// stop testing
	driver.close();
}

Now, when we want to go to and handle our login page, we simply create a new instance of that page and call the login method to perform our login action. This method in turn returns a HomePage object, which is a representation of the page we get after a successful login action. A sample implementation of our LoginPage object could look as follows:

public class LoginPage {
	
	private final WebDriver driver;
	
	public LoginPage(WebDriver driver) {
		this.driver = driver;
		
		if(!driver.getTitle().equals("Login page")) {
			// we are not at the login page, go there
			driver.get("http://ourloginpage");
		}
	}
	
	public HomePage login(String username, String password) {
		driver.findElement(objMap.getLocator("loginUsername")).sendKeys("username");
		driver.findElement(objMap.getLocator("loginPassword")).sendKeys("password");
		driver.findElement(objMap.getLocator("loginSubmitbutton")).click();
		return new HomePage(driver);
	}	
}

It contains a constructor that opens the login page if it is not the page currently displayed. Alternatively, you could throw an exception and stop test execution whenever the login page is not the current page, depending on how you want your tests to behave. Our LoginPage class also contains a login method that handles our login actions. If the login screen ever changes, we only need to update our page object once, thanks to the proper use of page objects.

When the login action is completed successfully, our login method returns a HomePage object. This class is set up similarly to the LoginPage class and provides methods specific to the page of our application under test that it represents.
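As a minimal sketch, such a HomePage class could look like this; the logout action and the logoutButton object key are hypothetical examples:

public class HomePage {
	
	private final WebDriver driver;
	
	public HomePage(WebDriver driver) {
		this.driver = driver;
	}
	
	// example of a page-specific action: logging out returns us to the login page
	public LoginPage logout() {
		driver.findElement(objMap.getLocator("logoutButton")).click();
		return new LoginPage(driver);
	}
}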

In case we also want to test an unsuccessful login, we simply add a method to our LoginPage class that executes the behaviour required:

public LoginPage incompleteLogin(String username) {
	driver.findElement(objMap.getLocator("loginUsername")).sendKeys("username");
	driver.findElement(objMap.getLocator("loginSubmitbutton")).click();
	return this;
}

This alternative login procedure does not enter a password. As a result, the user is not logged in and the login page remains visible, hence we return the current LoginPage object here instead of a HomePage object. If we want to test this type of incomplete login in our script, we simply call our new incompleteLogin method:

public static void main(String args[]) {
		
	// start testing
	WebDriver driver = new HtmlUnitDriver();
		
	// incorrect login
	LoginPage lp = new LoginPage(driver);
	lp = lp.incompleteLogin("user1");
	Assert.assertEquals("You forgot to type your password",lp.getError());
		
	//stop testing
	driver.quit();
}

The getError method is implemented in our LoginPage class as well:

public String getError() {
	return driver.findElement(objMap.getLocator("errorField")).getText();
}

This getError method is the result of another best practice. To keep your test code separated from your page object code as much as possible, always place your assertions outside of your page objects. If you need to validate specific values from a page, write methods that return them, as we did in the example above with the getError method.

To wrap things up: by using the Page Object design pattern, we have introduced another Single Point of Maintenance (SPOM) in our Selenium test framework. This means even less maintenance required and a higher ROI achieved!

An example Eclipse project using the pattern described above can be downloaded here.