Are your test automation efforts holding you back?

In case you’ve either been living under a rock when it comes to testing trends (I’m not saying you have), you haven’t paid any attention to what I’ve written in the past (much more likely), or possibly both: it seems that everybody in the testing world more or less agrees that automated testing can be highly beneficial, or even essential, to the success of a software development project, especially with ever-shortening release cycles and less and less time per cycle available for testing.

But why, then, are so many hours spent on creating automated tests nothing more than a waste of time? In this post, I would like to sum up a couple of reasons that cause automated test efforts to go bad. These are reasons I’ve seen in practice and have had to work with and improve upon. And yes, I have certainly been guilty of some of the things I’ll mention in this post myself. That’s how I learned – the hard way.

Your flaky tests are sending mixed signals
One of the most prominent causes of time wasted on test automation is what I like to call ‘flaky test syndrome’: tests that pass some of the time and fail at other times. As a result, your test results cannot be trusted, which effectively renders your tests useless. There is a multitude of possible reasons for tests becoming flaky, the most important being timing or synchronization issues. For example: a result has not been stored in the database yet when a test step tries to check it, or an object is not yet present on a web page when your test tries to access it. There are a lot of possible solutions to these problems, and I won’t go into detail on every one of them, but what is paramount to test automation success is that your test results are 100% trustworthy. That is, if a test fails, it should be due to a defect in your application under test (or possibly an unforeseen test environment issue), not due to you using Thread.sleep() throughout your tests.
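As an illustration, here is a minimal sketch of how a fixed Thread.sleep() could be replaced with an explicit wait in Selenium WebDriver, so the test waits exactly as long as needed and no longer. The locator and the timeout value are made up for the sake of the example.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SynchronizationSketch {

	// Don't do this: the test always waits five seconds, even when the result
	// appears much earlier (or later, in which case the test still fails).
	public void waitTheWrongWay() throws InterruptedException {
		Thread.sleep(5000);
	}

	// Do this instead: wait until the element is actually visible, up to a
	// maximum of ten seconds. The 'result' locator is a made-up example.
	public WebElement waitTheRightWay(WebDriver driver) {
		WebDriverWait wait = new WebDriverWait(driver, 10);
		return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("result")));
	}
}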


Your test results take too long to analyze
There is no use in having an automated test suite that runs smoothly when your test results can’t be interpreted at a glance. If you can’t see within five seconds whether a test run was successful, you might want to update your reporting strategy. Also, if you can’t determine at least the general direction of the source of an error from your reporting within a couple of seconds, you might want to update your reporting strategy. And if your default response to a failure in your test reports is to rerun the test and start debugging or analyzing from there, you might want to update your reporting strategy.

You put too much trust in your automated tests and test results
Even though I just stated that you should do everything within your power to make your automated tests as trustworthy as possible, this by no means implies that you should rely on your automated tests alone when it comes to product quality. Automated tests (or better, checks) can only do so much in verifying whether your application meets all customer demands. So, for all of you having to deal with managers telling you to automate all of the tests: please, please do say NO.

Please don't do this!

You test everything through the user interface
I don’t think I have to say this again, especially not to those of you who are regular readers of my blog, but I’m fairly skeptical when it comes to GUI-driven test automation. If anything, it makes your test runs take longer, and it will probably result in more maintenance effort whenever the UI is redesigned. Setting up a proper UI-based automation framework also tends to take longer, so unless you’re really testing the user interface (which is not that often the case), try to avoid executing your tests through the UI as much as possible. It might take a little longer to figure out how to perform a test below the UI, but it will be much more efficient in the long run.
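To make this a bit more concrete, here is a rough sketch of what a check below the UI could look like: a plain HTTP call against a REST endpoint instead of clicking through screens. The endpoint URL and the class name are made up for illustration; the point is simply that no browser is involved.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiLevelCheckSketch {

	// Retrieves an account balance directly from a (hypothetical) REST endpoint,
	// instead of logging in through the UI and scraping the value from a web page.
	public static String getAccountBalance(String accountId) throws Exception {

		URL url = new URL("http://localhost:8080/api/accounts/" + accountId + "/balance");
		HttpURLConnection connection = (HttpURLConnection) url.openConnection();
		connection.setRequestMethod("GET");

		BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
		StringBuilder response = new StringBuilder();
		String line;
		while ((line = reader.readLine()) != null) {
			response.append(line);
		}
		reader.close();
		connection.disconnect();

		// The assertion on this value lives in the test script, just as with UI tests.
		return response.toString();
	}
}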

Hopefully these points will prompt you to take a critical look at what you’re doing in your test automation efforts. What also might help is to ask yourself what I think is the most important question when it comes to test automation (and to work in general, by the way):

“Am I being effective, or am I just being busy?”

Let’s make test automation better.

Should virtual assets contain business logic?

A while ago I read an interesting discussion in the LinkedIn Service Virtualization group that centered around the question of whether or not to include business logic in virtual assets when implementing service virtualization. The general consensus was a clear NO: you should never include business logic in your virtual assets. The reason for this is that it makes them unnecessarily complex and increases maintenance efforts whenever the implementation (or even just the interface) of the service being simulated changes. In general, it is considered better to make your virtual assets purely data driven, and to keep full control over the data that is sent to the virtual asset, so that you can easily manage the data that is to be returned by it.
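To illustrate what ‘purely data driven’ means here, the sketch below shows the idea in plain Java: the virtual asset simply looks up a canned response for a given request key, without evaluating any business rules. In a real implementation this mapping would live inside your service virtualization tool rather than in hand-written code, and the student IDs and responses are of course made up.

import java.util.HashMap;
import java.util.Map;

public class DataDrivenVirtualAssetSketch {

	// The test team prepares and fully controls this data set up front.
	private final Map<String, String> cannedResponses = new HashMap<String, String>();

	public DataDrivenVirtualAssetSketch() {
		cannedResponses.put("12345", "<status>REGISTERED</status>");
		cannedResponses.put("67890", "<status>GRADUATED</status>");
	}

	// No business logic: the response depends only on the prepared data.
	public String handleStatusRequest(String studentId) {
		return cannedResponses.get(studentId);
	}
}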

Data driven virtual assets

In general, I agree with the opinions voiced there. It is considered good practice to keep virtual assets as simple as possible, and an important way to achieve this is by adopting the aforementioned data driven approach. However, in a recent project where I was working on a service virtualization implementation, I had no choice but to implement a reasonable amount of business logic in some of the virtual assets.

Why? Let’s have a look.

The organization that implemented service virtualization was a central administrative body for higher education in the Netherlands; let’s call them C (for Client). Students use C’s core application to register for university and college programs, and all information exchange between educational institutions and the government institutions responsible for education goes through this application as well. So, on one side we have the educational institutions, and on the other side we have the government (so to speak). The virtual assets we implemented simulated the interface between C and the government. The test environment containing this simulated government interface was used by C’s own test team, but C also provided it to the testers at the universities and colleges who wanted to do end-to-end tests:

The test situation

As around 20 different universities were using C’s test environment for their end-to-end tests, coordinating and centralizing the test data they were using was a logistical nightmare. Each of them received a number of test students they could use for their tests, but apart from that, C had no control over the number and nature of the tests that were performed through its test environment. As a result, there was no way to determine the amount and state of test data sent to and from the virtual asset at any given point in time.

Moreover, the virtual asset needed to remember (part of) the test data it received, because these data were required to calculate the responses to future requests. For example, one of the operations supported by the virtual asset determined whether a student was eligible for a certain type of financial support, which depended on the number and types of educational programs the student had previously enrolled in and finished.

To enable the virtual asset to remember received data (and essentially become stateful), we hooked up a simple database to the virtual asset and stored the data entity updates we received, such as student enrollments and students finishing an education, in that database. These data were then used to determine any response values that depended on both data and business logic. Simple data retrieval requests were, of course, implemented in a data-driven manner.
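In simplified form (and with an in-memory map standing in for the actual database), the stateful part of the virtual asset looked something like the sketch below. The entity names and the eligibility rule are placeholders for illustration, not the real business rules.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StatefulVirtualAssetSketch {

	// Stands in for the simple database we hooked up to the virtual asset.
	private final Map<String, List<String>> finishedEducations = new HashMap<String, List<String>>();

	// Called when the virtual asset receives an 'education finished' update.
	public void storeFinishedEducation(String studentId, String educationType) {
		if (!finishedEducations.containsKey(studentId)) {
			finishedEducations.put(studentId, new ArrayList<String>());
		}
		finishedEducations.get(studentId).add(educationType);
	}

	// Called for an eligibility request: the answer depends on data received earlier,
	// which is exactly why this asset had to contain some state and business logic.
	public boolean isEligibleForFinancialSupport(String studentId) {
		List<String> finished = finishedEducations.get(studentId);
		int count = (finished == null) ? 0 : finished.size();
		return count < 2; // placeholder rule, not the actual one
	}
}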

This deliberate choice to implement a certain – but definitely limited – amount of business logic in the virtual asset enabled not only company C, but also the organizations that depended on C’s test environments, to successfully perform realistic end-to-end test scenarios.

But…

Once the people at company C saw the power of service virtualization, they kept filing requests for increasingly complex business logic to be implemented in the virtual asset. As any good service virtualization consultant should, I was very cautious about implementing these features. My main argument was that I could implement them rather easily, but someone would eventually have to maintain them, at which point I would be long gone. In the end, we found an agreeable mix of realistic behaviour (with the added complexity) and maintainability. They were happy because they had a virtual asset they could use and provide to their partners; I was happy because I delivered a successful and maintainable service virtualization implementation.

So, yes, in most cases you should make sure your virtual assets are purely data driven when you’re implementing service virtualization. However, in some situations, implementing a bit of business logic might just make the difference between an average and a great implementation. Just remember to keep it to the bare minimum.

Using the Page Object Model pattern in Selenium + TestNG tests

After having introduced the Selenium + TestNG combination in my previous post, I would like to show you how to apply the Page Object Model, an often-used pattern for improving the maintainability of Selenium tests, to this setup. To do so, we need to accomplish the following steps:

  • Create Page Objects representing pages of a web application that we want to test
  • Create methods for these Page Objects that represent actions we want to perform within the pages that they represent
  • Create tests that perform these actions in the required order and perform the checks that make up the test scenario
  • Run the tests as TestNG tests and inspect the results

Creating Page Objects for our test application
For this purpose, I again use the ParaBank demo application, which can be found here. I’ve narrowed the scope of my tests down to just three of the pages in this application: the login page, the home page (where you end up after a successful login) and an error page (where you land after a failed login attempt). As an example, this is the code for the login page:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
	
	private WebDriver driver;
	
	public LoginPage(WebDriver driver) {
		
		this.driver = driver;
		
		if(!driver.getTitle().equals("ParaBank | Welcome | Online Banking")) {
			driver.get("http://parabank.parasoft.com");
		}		
	}
	
	public ErrorPage incorrectLogin(String username, String password) {
		
		driver.findElement(By.name("username")).sendKeys(username);
		driver.findElement(By.name("password")).sendKeys(password);
		driver.findElement(By.xpath("//input[@value='Log In']")).click();
		return new ErrorPage(driver);
	}
	
	public HomePage correctLogin(String username, String password) {
		
		driver.findElement(By.name("username")).sendKeys(username);
		driver.findElement(By.name("password")).sendKeys(password);
		driver.findElement(By.xpath("//input[@value='Log In']")).click();
		return new HomePage(driver);
	}
}

It contains a constructor that creates a new instance of the LoginPage object (and navigates to the login page if the browser isn’t there already), as well as two methods that we can use in our tests: incorrectLogin, which sends us to the error page, and correctLogin, which sends us to the home page. Likewise, I’ve constructed Page Objects for these two pages as well. A link to those implementations can be found at the end of this post.

Note that this code snippet isn’t optimized for maintainability – I used direct references to element properties rather than some sort of element-level abstraction, such as an Object Repository.
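As an example of what such an element-level abstraction could look like, here is a variant of the login page Page Object with the locators pulled into named fields, so a UI redesign only requires changes in one place. This is just one possible approach (and not necessarily the one used in the downloadable project).

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

	// All locators in one place, so a change to the UI only requires one update.
	private static final By USERNAME_FIELD = By.name("username");
	private static final By PASSWORD_FIELD = By.name("password");
	private static final By LOGIN_BUTTON = By.xpath("//input[@value='Log In']");

	private WebDriver driver;

	public LoginPage(WebDriver driver) {

		this.driver = driver;

		if (!driver.getTitle().equals("ParaBank | Welcome | Online Banking")) {
			driver.get("http://parabank.parasoft.com");
		}
	}

	public ErrorPage incorrectLogin(String username, String password) {

		login(username, password);
		return new ErrorPage(driver);
	}

	public HomePage correctLogin(String username, String password) {

		login(username, password);
		return new HomePage(driver);
	}

	private void login(String username, String password) {

		driver.findElement(USERNAME_FIELD).sendKeys(username);
		driver.findElement(PASSWORD_FIELD).sendKeys(password);
		driver.findElement(LOGIN_BUTTON).click();
	}
}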

Creating methods that perform actions on the Page Objects
You’ve seen these for the login page in the code sample above. I’ve included similar methods for the other two pages. A good example can be seen in the implementation of the error page Page Object:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class ErrorPage {
	
	private WebDriver driver;
	
	public ErrorPage(WebDriver driver) {
		
		this.driver = driver;
	}
	
	public String getErrorText() {
		
		return driver.findElement(By.className("error")).getText();
	}
}

By implementing a getErrorText method that retrieves the error message displayed on the error page, we can call this method in our actual test script. It is considered good practice to separate the implementation of your Page Objects from the actual assertions that are performed in your test script (separation of responsibilities). If you need to perform additional checks, just add a method that returns the actual value displayed on the screen to the associated Page Object, and add assertions to the scripts wherever the check needs to be performed.
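For instance, a home page Page Object along the same lines could expose a method that returns the welcome text shown after logging in. The sketch below is illustrative; the locator and the actual implementation in the downloadable project may differ.

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class HomePage {

	private WebDriver driver;

	public HomePage(WebDriver driver) {

		this.driver = driver;
	}

	// Returns a value displayed on the page; the assertion itself stays in the test script.
	public String getWelcomeText() {

		return driver.findElement(By.className("smallText")).getText();
	}
}

A test script could then assert on the value returned by getWelcomeText to verify that the login succeeded.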

Create tests that perform the required actions and execute the required checks
Now that we have created both the page objects and the methods that we want to use for the checks in our test scripts, it’s time to create these test scripts. This is again pretty straightforward, as this example shows (imports removed for brevity):

package com.ontestautomation.seleniumtestngpom.tests;

public class TestNGPOM {
	
	WebDriver driver;
	
	@BeforeSuite
	public void setUp() {
		
		driver = new FirefoxDriver();
		driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
	}
	
	@Parameters({"username","incorrectpassword"})
	@Test(description="Performs an unsuccessful login and checks the resulting error message")
	public void testLoginNOK(String username, String incorrectpassword) {
		
		LoginPage lp = new LoginPage(driver);
		ErrorPage ep = lp.incorrectLogin(username, incorrectpassword);
		Assert.assertEquals(ep.getErrorText(), "The username and password could not be verified.");
	}
	
	@AfterSuite
	public void tearDown() {
		
		driver.quit();
	}
}

Note the use of the page objects and the check being performed using a method from these page object implementations – in this case the getErrorText method of the error page Page Object.

As we have designed our tests as Selenium + TestNG tests, we also need to define a testng.xml file that specifies which tests to run and which parameter values the parameterized testLoginNOK test takes. Again, see my previous post for more details.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
 
<suite name="My first TestNG test suite" verbose="1" >
  <parameter name="username" value="john"/>
  <parameter name="password" value="demo"/>
  <test name="Login tests">
    <packages>
      <package name="com.ontestautomation.seleniumtestngpom.tests" />
   </packages>
 </test>
</suite>

Run the tests as TestNG tests and inspect the results
Finally, we can run our tests by right-clicking on the testng.xml file in the Package Explorer and selecting ‘Run As > TestNG Suite’. After test execution has finished, the test results will appear in the ‘Results of running suite’ tab in Eclipse. Again, please note that using meaningful names for tests and test suites in the testng.xml file makes these results much easier to read and interpret.

TestNG test results in Eclipse

An extended HTML report can be found in the test-output subdirectory of your project:

TestNG HTML test results

The Eclipse project I have used for the example described in this post, including a sample HTML report as generated by TestNG, can be downloaded here.