Using wrapper methods for better error handling in Selenium

Frequent users of Selenium WebDriver will have noticed that when testing responsive, dynamic web applications, timing and synchronization can be a major hurdle in creating useful and reliable automated tests. In my current project, I’m creating WebDriver-based automated tests for a web application that relies heavily on JavaScript for page rendering and content retrieval and display, which makes waiting for the right element to be visible, clickable or in whatever other state it needs to be significantly less than trivial.

Using the standard Selenium methods such as click() and sendKeys() directly in my scripts often resulted in failures because the web page element was invisible, disabled or reloaded (the last of these resulting in a StaleElementReferenceException). Even using implicit waits when creating the WebDriver object didn’t always lead to stable tests, and I refuse to use Thread.sleep() (and so should you!). I also didn’t want to use individual WebDriverWait calls for every single object that needed to be waited on, since that introduces a lot of extra code that needs to be maintained. So I knew I had to do something more intelligent for my tests to be reliable – and therefore valuable, as opposed to a waste of time and money.

Wrapper methods
The solution to this problem lies in using wrapper methods for the standard Selenium methods. So instead of doing this every time I need to perform a click:

(new WebDriverWait(driver, 10)).until(ExpectedConditions.elementToBeClickable(By.id("loginButton"))).click();

I have created a wrapper method click() in a MyElements class that looks like this:

public static void click(WebDriver driver, By by) {
	(new WebDriverWait(driver, 10)).until(ExpectedConditions.elementToBeClickable(by)).click();
}

Of course, the 10 second timeout is arbitrary and it’s better to replace it with a named constant. Now, every time I want to perform a click in my test I can simply call:

MyElements.click(driver, By.id("loginButton"));

which automatically performs a WebDriverWait first, resulting in much more stable, readable and maintainable scripts.

Extending your wrapper methods: error handling
Using wrapper methods for Selenium calls has the additional benefit of making error handling much more generic as well. For example, if you often encounter a StaleElementReferenceException (which those of you writing tests for responsive and dynamic web applications might be all too familiar with), you can simply handle this in your wrapper method and be done with it once and for all:

public static void click(WebDriver driver, By by) {
	try {
		(new WebDriverWait(driver, 10)).until(ExpectedConditions.elementToBeClickable(by)).click();
	} catch (StaleElementReferenceException sere) {
		// simply retry finding the element in the refreshed DOM
		driver.findElement(by).click();
	}
}

This is essentially my own version of this excellent solution for handling a StaleElementReferenceException. By using wrapper methods, you can easily handle any type of exception in its own specific way, which will improve your tests significantly.
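To make the retry idea concrete outside of Selenium: the pattern boils down to "try the action, and if it throws, try again". Below is a minimal, generic sketch of that pattern in plain Java; the `Retry` class and `withRetries` name are mine, purely for illustration. In the wrapper above, the retried action would be the find-element-and-click call.

```java
import java.util.function.Supplier;

public class Retry {

    // Run an action up to maxAttempts times; if it keeps throwing,
    // rethrow the last exception so the caller still sees the failure.
    public static <T> T withRetries(Supplier<T> action, int maxAttempts) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                lastFailure = e; // e.g. a StaleElementReferenceException: loop and try again
            }
        }
        // All attempts failed: surface the last exception to the caller
        throw lastFailure;
    }
}
```

A wrapper method can then delegate to this helper, keeping the retry policy in one place instead of duplicating try/catch blocks per Selenium call.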

Extending your wrapper methods: logging
Your wrapper methods also allow for better logging capabilities. For example, if you’re using the ExtentReports library for creating HTML reports for Selenium tests (as I do in my current project), you can create a log entry every time an object is not clickable after the WebDriverWait times out:

public static void click(WebDriver driver, By by) {
	try {
		(new WebDriverWait(driver, 10)).until(ExpectedConditions.elementToBeClickable(by)).click();
	} catch (StaleElementReferenceException sere) {
		// simply retry finding the element in the refreshed DOM
		driver.findElement(by).click();
	} catch (TimeoutException toe) {
		test.log(LogStatus.ERROR, "Element identified by " + by.toString() + " was not clickable after 10 seconds");
	}
}

Here, test is the ExtentTest object representing the log for the current test. See how easy this is?

Other possible benefits
There’s a couple of other benefits to using these wrapper methods, some of which I use in my current project as well, such as:

  • Automatically failing the JUnit / TestNG (or in my case NUnit) test whenever a specific error occurs by including an Assert.Fail();
  • Restoring or recreating specific conditions or a specific application state after an error occurs to prepare for the next test (e.g. ending the current user session)
  • Automatically restarting a test in case of a specific error

And I’m sure there are many more.

Wrapping this up (pun intended!), using these wrapper methods for standard Selenium calls can be very beneficial to your Selenium experience and that of your clients or employers. They certainly saved me a lot of unnecessary code and frustration over brittle tests.

Model-based testing with GraphWalker and Selenium – part 1

In this post I’d like to make a start exploring the possibilities and drawbacks that model-based testing (MBT) can offer to test automation in general and Selenium WebDriver in particular. I’ll be using GraphWalker as my MBT tool of choice, mostly because it’s open source, doesn’t have a steep learning curve and it’s available as a Java library, which makes integration with any Selenium project as simple as downloading the necessary .jar files and adding them to your test project.

I’ll use the login procedure for the Parabank demo application I’ve used so often before as an example. First, we need to model the login procedure using standard edges and nodes (which are called vertexes in GraphWalker):
A model of the Parabank login procedure
I’ve used yEd to draw this model, mostly because this tool offers the option to export the model in .graphml format, which will come in very handy at a later stage. I’ll go into that in another post.

After the browser has been started and the Parabank homepage is displayed, we can perform either a successful or an unsuccessful login. Unsuccessful logins take us to an error page, which offers us the possibility to try again, again with both positive and negative results as a possibility. A successful login takes us to the Accounts Overview page, from where we can log out. The application offers us many more options once we have performed a successful login, but for now I’ll leave these out of scope.

Next, we are going to ‘draw’, i.e. implement this model in GraphWalker. This is pretty straightforward:

private Model createModel() {

	// Create a new, empty model
	Model model = new Model();

	// Create vertexes (nodes)
	Vertex v_Start = new Vertex().setName("v_Start");
	Vertex v_HomePage = new Vertex().setName("v_HomePage");
	Vertex v_ErrorPage = new Vertex().setName("v_ErrorPage");
	Vertex v_AccountsOverviewPage = new Vertex().setName("v_AccountsOverviewPage");

	// Create edges, each connecting a source vertex to a target vertex
	Edge e_StartBrowser = new Edge().setName("e_StartBrowser").setSourceVertex(v_Start).setTargetVertex(v_HomePage);
	Edge e_LoginFailed = new Edge().setName("e_LoginFailed").setSourceVertex(v_HomePage).setTargetVertex(v_ErrorPage);
	Edge e_LoginFailedAgain = new Edge().setName("e_LoginFailedAgain").setSourceVertex(v_ErrorPage).setTargetVertex(v_ErrorPage);
	Edge e_LoginSucceeded = new Edge().setName("e_LoginSucceeded").setSourceVertex(v_HomePage).setTargetVertex(v_AccountsOverviewPage);
	Edge e_LoginSucceededAfterFailure = new Edge().setName("e_LoginSucceededAfterFailure").setSourceVertex(v_ErrorPage).setTargetVertex(v_AccountsOverviewPage);
	Edge e_Logout = new Edge().setName("e_Logout").setSourceVertex(v_AccountsOverviewPage).setTargetVertex(v_HomePage);

	// Add vertexes to the model
	model.addVertex(v_Start).addVertex(v_HomePage).addVertex(v_ErrorPage).addVertex(v_AccountsOverviewPage);

	// Add edges to the model
	model.addEdge(e_StartBrowser).addEdge(e_LoginFailed).addEdge(e_LoginFailedAgain).addEdge(e_LoginSucceeded).addEdge(e_LoginSucceededAfterFailure).addEdge(e_Logout);

	return model;
}

Easy, right? The only downside is that this is quite a lot of work, especially when your models get pretty large. It also leaves (a lot of) room for improvement when it comes to maintainability. We’ll get to that in a later post as well.

Theoretically, we could have GraphWalker go through this model and explore all vertexes and edges, but in order to do meaningful work we need to link them to concrete action steps performed on Parabank. A good and clean way to do this is to consider the following:

  • Edges are like state transitions in your model, so this is where the action (typing in text boxes, clicking on links, etc.) ends up.
  • Vertexes represent states in your model, so this is where the verifications (checks) need to be performed.

For example, this is the implementation of the e_StartBrowser edge:

WebDriver driver = null;

public void e_StartBrowser() {

	driver = new FirefoxDriver();
	driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
}


By giving the method the same name as the actual edge, GraphWalker knows that it needs to execute this code snippet every time the e_StartBrowser edge is encountered when running through the model. We can do the same for the vertexes, for example for the v_HomePage vertex, which represents the Parabank homepage that is displayed after the browser has been started:

public void v_HomePage() {

	Assert.assertEquals(driver.getTitle(), "ParaBank | Welcome | Online Banking");
}

For simplicity, we’ll just do a check on the page title, but you’re free to add any type of check you wish, of course.

Finally, we need to tell GraphWalker to load and run the model. I prefer using TestNG for this:

@Test
public void fullCoverageTest() {

	// Create an instance of our model
	Model model = createModel();
	// Build the model (make it immutable) and give it to the execution context
	this.setModel(model.build());
	// Tell GraphWalker to run the model in a random fashion,
	// until every vertex is visited at least once.
	// This is called the stop condition.
	this.setPathGenerator(new RandomPath(new VertexCoverage(100)));
	// Get the starting vertex (v_Start)
	this.setNextElement(getModel().findVertices("v_Start").get(0));
	// Create the machine that will control the execution
	Machine machine = new SimpleMachine(this);
	// As long as the stop condition of the path generator is not fulfilled, hasNextStep will return true.
	while (machine.hasNextStep()) {
		// Execute the next step of the model.
		machine.getNextStep();
	}
}

The most important part in this test is the stop condition, which basically tells GraphWalker how to run the test and when to stop. In this example, I want to run the model and its corresponding Selenium actions randomly (within the restrictions implied by the model, of course) until every vertex has been hit at least once (100% vertex coverage). GraphWalker offers a lot of other stop conditions, such as running the model until:

  • Every edge has been hit at least once
  • A predefined vertex is hit
  • A predefined time period has passed (this is great for reliability tests)

A complete list of options for stop conditions can be found here in the GraphWalker documentation.
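To illustrate what a path generator with a vertex-coverage stop condition does under the hood, here is a self-contained sketch of a random walk over the login model from this post. This is plain Java with no GraphWalker dependency; the `RandomWalkDemo` class and its adjacency-list encoding are mine, for illustration only, while GraphWalker's RandomPath and VertexCoverage classes do this for real models.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class RandomWalkDemo {

    // Adjacency list: vertex name -> outgoing (edge name, target vertex) pairs,
    // mirroring the Parabank login model from this post.
    static final Map<String, List<List<String>>> GRAPH = Map.of(
        "v_Start", List.of(List.of("e_StartBrowser", "v_HomePage")),
        "v_HomePage", List.of(List.of("e_LoginFailed", "v_ErrorPage"),
                              List.of("e_LoginSucceeded", "v_AccountsOverviewPage")),
        "v_ErrorPage", List.of(List.of("e_LoginFailedAgain", "v_ErrorPage"),
                               List.of("e_LoginSucceededAfterFailure", "v_AccountsOverviewPage")),
        "v_AccountsOverviewPage", List.of(List.of("e_Logout", "v_HomePage")));

    // Walk the graph randomly until every vertex has been visited at least once,
    // i.e. the equivalent of RandomPath with a 100% VertexCoverage stop condition.
    public static List<String> walk(long seed) {
        Random random = new Random(seed);
        Set<String> visited = new HashSet<>();
        List<String> path = new ArrayList<>();
        String current = "v_Start";
        visited.add(current);
        path.add(current);
        while (visited.size() < GRAPH.size()) {
            List<List<String>> outgoing = GRAPH.get(current);
            List<String> edge = outgoing.get(random.nextInt(outgoing.size()));
            path.add(edge.get(0)); // the edge: where the actions (clicks, typing) go
            current = edge.get(1);
            path.add(current);     // the vertex: where the checks go
            visited.add(current);
        }
        return path;
    }
}
```

The returned path alternates edges and vertexes, which is exactly the sequence of action methods and check methods GraphWalker invokes while running a model.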

When we run this test, we can see in the console output that GraphWalker walks through our model randomly until the stop condition has been satisfied:
GraphWalker console output

As usual when using TestNG, a nice little report has been created that tells us everything ran fine:

GraphWalker TestNG report - pass

Note that unlike what I did in the example above, it’s probably a good idea to use soft asserts when you’re using TestNG together with GraphWalker. Otherwise, TestNG and therefore GraphWalker will stop executing at the first check that fails. Unless that’s exactly what you want, of course.
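The idea behind soft asserts can be illustrated without TestNG: instead of throwing on the first failed check, failures are collected and only reported at the end, so the model walk can continue. The sketch below is my own illustration of that mechanism, not TestNG's actual SoftAssert implementation (which is what you should use in practice, via `assertEquals(...)` calls followed by a final `assertAll()`).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class SoftCheck {

    private final List<String> failures = new ArrayList<>();

    // Record a failed check instead of throwing, so execution can continue.
    public void assertEquals(Object actual, Object expected, String label) {
        if (!Objects.equals(actual, expected)) {
            failures.add(label + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    // Throw once, at the very end, listing everything that failed along the way.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " check(s) failed: " + failures);
        }
    }
}
```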

In this post we’ve seen a very basic introduction to the possibilities of MBT using GraphWalker and how it can be applied to Selenium tests. In upcoming posts I’ll dive deeper into the possibilities GraphWalker provides, how it can be used to generate executable models from .graphml models, and how to make the most of the tool while keeping your tests clean and maintainable.

The Eclipse project containing the code I’ve used in this post can be downloaded here.

Using the ExtentReports TestNG listener in Selenium Page Object tests

In this (long overdue) post I would like to demonstrate how to use ExtentReports as a listener for TestNG. By doing so, you can use your regular TestNG assertions and still create nicely readable ExtentReports-based HTML reports. I’ll show you how to do this using a Selenium WebDriver test that uses the Page Object pattern.

Creating the Page Objects
First, we need to create some page objects that we are going to exercise during our tests. As usual, I’ll use the Parasoft Parabank demo application for this. My tests cover the login functionality of the application, so I have created Page Objects for the login page, the error page where you land when you perform an incorrect login and the homepage (the AccountsOverviewPage) where you end up when the login action is successful. For those of you that have read my past post on Selenium and TestNG, these Page Objects have been used in that example as well. The Page Object source files are included in the Eclipse project that you can download at the end of this post.

Creating the tests
To demonstrate the reporting functionality, I have created three tests:

  • A test that performs a successful login – this one passes
  • A test that performs an unsuccessful login, where the check on the error message passes – this one also passes
  • A test that performs an unsuccessful login, where the check on the error message fails – this one fails

The tests look like this:

public class LoginTest {

    WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new FirefoxDriver();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @Test(description="Performs an unsuccessful login and checks the resulting error message (passes)")
    @Parameters({"incorrectusername", "incorrectpassword"})
    public void testFailingLogin(String username, String incorrectpassword) {
        LoginPage lp = new LoginPage(driver);
        ErrorPage ep = lp.incorrectLogin(username, incorrectpassword);
        Assert.assertEquals(ep.getErrorText(), "The username and password could not be verified.");
    }

    @Test(description="Performs an unsuccessful login and checks the resulting error message (fails)")
    @Parameters({"incorrectusername", "incorrectpassword"})
    public void failingTest(String username, String incorrectpassword) {
        LoginPage lp = new LoginPage(driver);
        ErrorPage ep = lp.incorrectLogin(username, incorrectpassword);
        Assert.assertEquals(ep.getErrorText(), "This is not the error message you're looking for.");
    }

    @Test(description="Performs a successful login and checks whether the Accounts Overview page is opened")
    @Parameters({"correctusername", "correctpassword"})
    public void testSuccessfulLogin(String username, String password) {
        LoginPage lp = new LoginPage(driver);
        AccountsOverviewPage aop = lp.correctLogin(username, password);
        Assert.assertEquals(aop.isAt(), true);
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

Pretty straightforward, right? I think this is a clear example of why using Page Objects and having the right Page Object methods make writing and maintaining tests a breeze.
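What keeps these tests so readable is that each Page Object method performs an action and returns the Page Object for the page you end up on, so a test reads as a chain of user steps. A stripped-down sketch of that idea follows, with a hypothetical `FakeDriver` stub standing in for WebDriver; all names and page titles here are mine, for illustration only.

```java
import java.util.Map;

// A tiny stand-in for WebDriver: logging in just changes the current page title.
class FakeDriver {
    private final Map<String, String> titleByCredentials;
    private String title = "ParaBank | Welcome | Online Banking";

    FakeDriver(Map<String, String> titleByCredentials) {
        this.titleByCredentials = titleByCredentials;
    }

    void submitLogin(String username, String password) {
        title = titleByCredentials.getOrDefault(username + ":" + password, "ParaBank | Error");
    }

    String getTitle() {
        return title;
    }
}

// Each Page Object wraps the driver and returns the Page Object for the next page.
class LoginPage {
    private final FakeDriver driver;
    LoginPage(FakeDriver driver) { this.driver = driver; }

    AccountsOverviewPage correctLogin(String username, String password) {
        driver.submitLogin(username, password);
        return new AccountsOverviewPage(driver);
    }

    ErrorPage incorrectLogin(String username, String password) {
        driver.submitLogin(username, password);
        return new ErrorPage(driver);
    }
}

class AccountsOverviewPage {
    private final FakeDriver driver;
    AccountsOverviewPage(FakeDriver driver) { this.driver = driver; }

    boolean isAt() {
        return driver.getTitle().contains("Accounts Overview");
    }
}

class ErrorPage {
    private final FakeDriver driver;
    ErrorPage(FakeDriver driver) { this.driver = driver; }

    boolean isAt() {
        return driver.getTitle().contains("Error");
    }
}
```

Because every action returns the next page, a wrong navigation shows up as a compile-time error in the test rather than a runtime surprise, which is a large part of why maintenance stays cheap.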

Creating the ExtentReports TestNG listener
Next, we need to define the TestNG listener that creates the ExtentReports reports during test execution:

public class ExtentReporterNG implements IReporter {

    private ExtentReports extent;

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        extent = new ExtentReports(outputDirectory + File.separator + "ExtentReportTestNG.html", true);
        for (ISuite suite : suites) {
            Map<String, ISuiteResult> result = suite.getResults();
            for (ISuiteResult r : result.values()) {
                ITestContext context = r.getTestContext();
                buildTestNodes(context.getPassedTests(), LogStatus.PASS);
                buildTestNodes(context.getFailedTests(), LogStatus.FAIL);
                buildTestNodes(context.getSkippedTests(), LogStatus.SKIP);
            }
        }
        extent.flush();
        extent.close();
    }

    private void buildTestNodes(IResultMap tests, LogStatus status) {
        ExtentTest test;
        if (tests.size() > 0) {
            for (ITestResult result : tests.getAllResults()) {
                test = extent.startTest(result.getMethod().getMethodName());
                test.getTest().startedTime = getTime(result.getStartMillis());
                test.getTest().endedTime = getTime(result.getEndMillis());
                for (String group : result.getMethod().getGroups())
                    test.assignCategory(group);
                String message = "Test " + status.toString().toLowerCase() + "ed";
                if (result.getThrowable() != null)
                    message = result.getThrowable().getMessage();
                test.log(status, message);
                extent.endTest(test);
            }
        }
    }

    private Date getTime(long millis) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(millis);
        return calendar.getTime();
    }
}

This listener will create an ExtentReports report called ExtentReportTestNG.html in the default TestNG output folder test-output. This report lists all passed tests, then all failed tests and finally all tests that were skipped during execution. There’s no need to add specific ExtentReports log statements to your tests.

Running the test and reviewing the results
To run the tests, we need to define a testng.xml file that enables the listener we just created and runs all tests we want to run:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">

<suite name="Parabank login test suite" verbose="1">
	<listeners>
		<listener class-name="com.ontestautomation.extentreports.listener.ExtentReporterNG" />
	</listeners>
	<parameter name="correctusername" value="john" />
	<parameter name="correctpassword" value="demo" />
	<parameter name="incorrectusername" value="wrong" />
	<parameter name="incorrectpassword" value="credentials" />
	<test name="Login tests">
		<packages>
			<package name="com.ontestautomation.extentreports.tests" />
		</packages>
	</test>
</suite>

When we run our test suite using this testng.xml file, all tests in the com.ontestautomation.extentreports.tests package are run and an ExtentReports HTML report is created in the default test-output folder. The resulting report can be seen here.

More examples
More examples on how to use the ExtentReports listener for TestNG can be found on the ExtentReports website.

The Eclipse project I have created to demonstrate the above can be downloaded here.