How I would approach creating automated user interface-driven tests

One of the best, yet also most time-consuming parts of running a blog is answering questions from readers. Questions are asked here on the site (through comments), but also frequently via email. I love interacting with you, my readers, and I’ll do whatever I can to help you, but there’s one thing even better than answering questions: preventing them. Specifically, I get a lot of questions asking me how I’d approach creating a ‘framework’ (I prefer ‘solution’, but more on that at the end of this post) for automated end-to-end user interface-driven tests, or checks, as they should be called.

Also, I get a lot of questions related to blog posts I wrote a year or more ago, where I no longer support the solution or approach I described at the time. I’ve put up an ‘old post’ notice on some of them (example here), but that doesn’t prevent readers from asking questions on that subject. Which is fine, but sometimes a little frustrating.

As a means of providing you all with the answers to some of the most frequently asked questions, I’ve spent some time creating a blueprint for a solution for automated user interface-driven tests for you to look at. It illustrates how I usually approach creating these tests, and how I deal with things like test specification and reporting. The entire solution is available for your reference on my GitHub page. Feel free to copy, share, read and maybe learn something from it.

Since I’m a strong believer in answering the ‘why?’ in test automation as well as the ‘how?’, I’ll explain my design and tool choices in this post too. That does not mean that you should simply copy my choices, or that they’re the right choices in the first place, but it might give you a little insight into the thought process behind them.

Tools included
First of all, the language. Even though almost all the technical posts I’ve written on this blog so far have covered Java tools, this solution is written in C#. Why? Because I’ve come to like the .NET ecosystem over the last year and a half, and because doing so gives me a chance to broaden my skill set a little more. Don’t worry, Java and C# are pretty close to one another, especially on this level, so creating a similar setup in Java shouldn’t be a problem for you.

Now then, the tools:

  • For browser automation, I’ve decided to go with the obvious option: Selenium WebDriver. Why? Because it’s what I know. Also, Selenium has been the go-to standard for browser automation for a number of years now, and it doesn’t look like that’s going to change anytime soon.
  • For test and test data specification, I’ve chosen SpecFlow, which is basically Cucumber for .NET. Why? Because it gives you a means of describing what your tests do in a business readable format. Also, it comes with native support for data driven testing (through scenario outlines), which is among the first features I look for when evaluating any test automation tool. So, as you can see, even if you’re not doing BDD (which I don’t), using SpecFlow has its benefits. I prefer using SpecFlow (or Cucumber for that matter) over approaches such as specifying your tests in Excel. Dealing with workbooks, sheets, rows and cells is just too cumbersome.
  • For running tests and performing assertions, I decided to go with NUnit. Why? It’s basically the .NET sibling of JUnit for Java, and it integrates very nicely with SpecFlow. I have no real experience with alternatives (such as MSTest), so it’s not really a well-argumented decision, but it works perfectly, and that’s what counts.
  • Finally, for reporting, I’ve added ExtentReports. Why? If you’ve been reading this blog for a while, you know that I’m a big fan of this tool, because it creates great-looking HTML reports with just a couple of lines of code. It has built-in support for including screenshots as well, which makes it perfectly suited for the user interface-driven tests I want to perform. Much better and easier than trying to build your own reports.

Top level: SpecFlow feature and scenarios
Since we’re using SpecFlow here, a test is specified as a number of scenarios written in the Gherkin Given-When-Then format. These scenarios are grouped together in features, with every feature describing a specific piece of functionality or behavior of our application under test. I’m running my example tests against the Parasoft demo application ParaBank, and the feature I’m using tests the login function of this application:

Feature: Login
	In order to access restricted site options
	As a registered ParaBank user
	I want to be able to log in

Scenario Outline: Login using valid credentials
	Given I have a registered user <firstname> with username <username> and password <password>
	And he is on the ParaBank home page
	When he logs in using his credentials
	Then he should land on the Accounts Overview page
	Examples:
	| firstname | username | password |
	| John      | john     | demo     |
	| Bob       | parasoft | demo     |
	| Alex      | alex     | demo     |

Scenario: Login using incorrect password
	Given I have a registered user John with username john and password demo
	And he is on the ParaBank home page
	When he logs in using an invalid password
	Then he should see an error message stating that the login request was denied

Level 2: Step definitions
Not only should features and scenarios be business readable, I think it’s a good idea that your automation code is as clean and as readable as possible too. A step definition, which is basically the implementation of a single step (line) in a scenario, should convey what it does through the use of fluent code and descriptive method names. As an example, here’s the implementation of the When he logs in using his credentials step:

[When(@"he logs in using his credentials")]
public void WhenHeLogsInUsingHisCredentials()
{
    User user = ScenarioContext.Current.Get<User>();

    new LoginPage().
        SetUsername(user.Username).
        SetPassword(user.Password).
        ClickLoginButton();
}

Quite readable, right? I retrieve a previously stored object of type User (the SpecFlow ScenarioContext and FeatureContext are really useful for this purpose) and use its Username and Password properties to perform a login action on the LoginPage. Voila: fluent code, clear method names and therefore readable, understandable and maintainable code.
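For this purpose, the ScenarioContext behaves like a typed key-value store scoped to the running scenario: a Given step calls Set&lt;T&gt;() to store an object, and later steps call Get&lt;T&gt;() to retrieve it. As a quick, self-contained illustration of that Set/Get pairing (the `MiniScenarioContext` class below is a stand-in I made up for this sketch; the real `ScenarioContext` comes with SpecFlow):

```csharp
using System;
using System.Collections.Generic;

// Illustration only: a tiny stand-in that mimics the Set<T>()/Get<T>()
// pairing the step definitions rely on. The real ScenarioContext is
// provided by SpecFlow (TechTalk.SpecFlow).
public class MiniScenarioContext
{
    private readonly Dictionary<Type, object> _items = new Dictionary<Type, object>();

    // store one object per type, just like ScenarioContext.Set<T>()
    public void Set<T>(T item)
    {
        _items[typeof(T)] = item;
    }

    // retrieve it again in a later step, just like ScenarioContext.Get<T>()
    public T Get<T>()
    {
        return (T)_items[typeof(T)];
    }
}

// Simple POCO matching the properties used in the step definition above
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
```

In the real solution, the Given step stores the `User` once, and every subsequent step of the scenario can pick it up without passing parameters around.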

Level 3: Page Objects
I’m using the Page Object pattern for reusability purposes. Every action I can perform on the page (setting the username, setting the password, etc.) has its own method. Furthermore, I have each method return a Page Object, which allows me to write the fluent code we’ve seen in the previous step. This is not always possible, though. The ClickLoginButton method, for example, does not return a Page Object, since clicking the button might redirect the user to either of two different pages: the Accounts Overview page (in case of a successful login) or the Login Error page (in case something goes wrong). Here’s the implementation of the SetUsername and SetPassword methods:

public LoginPage SetUsername(string username)
{
    OTAElements.SendKeys(_driver, textfieldUsername, username);
    return this;
}

public LoginPage SetPassword(string password)
{
    OTAElements.SendKeys(_driver, textfieldPassword, password);
    return this;
}

In the next step, we’ll see how I’ve implemented the SendKeys method, as well as the reasoning behind using this wrapper method instead of calling the Selenium API directly. First, let’s take a look at how I identify elements. A commonly used approach is the PageFactory pattern, but I chose not to use it: it’s not that useful to me, and I don’t particularly like having to spend two lines (and possibly a third, blank line for readability) on each element on a page. Instead, I simply declare the appropriate By locator at the top of the page object so that I can pass it to my wrapper methods:

private By textfieldUsername = By.Name("username");
private By textfieldPassword = By.Name("password");
private By buttonLogin = By.XPath("//input[@value='Log In']");

In this example, I also used the LoadableComponent pattern for better page loading and error handling. Again, whether or not you use it is entirely up to you, but I think it can be a very useful pattern in some cases. I don’t use it in my current project though, since there’s no need for it (yet). Feel free to copy it, or leave it out. Whatever works best for you.
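Selenium’s support library ships a LoadableComponent&lt;T&gt; base class for this pattern. Stripped of Selenium specifics, the mechanics boil down to the sketch below (the class and method names here are my own simplified stand-ins, not Selenium’s API): Load() checks whether the page is there, triggers navigation if it isn’t, and fails loudly if the page still hasn’t appeared.

```csharp
using System;

// Simplified stand-in for the LoadableComponent pattern. A real page object
// would hold an IWebDriver and check for a known element in IsLoaded().
public abstract class Loadable<T> where T : Loadable<T>
{
    public T Load()
    {
        if (!IsLoaded())
        {
            ExecuteLoad();            // e.g. navigate to the page URL
        }
        if (!IsLoaded())
        {
            throw new InvalidOperationException("Page did not load");
        }
        return (T)this;               // return the page object for chaining
    }

    protected abstract bool IsLoaded();
    protected abstract void ExecuteLoad();
}

// Hypothetical page object using the base class, with a counter standing in
// for actual browser navigation.
public class FakeLoginPage : Loadable<FakeLoginPage>
{
    public int NavigationCount { get; private set; }
    private bool _loaded;

    protected override bool IsLoaded()
    {
        return _loaded;
    }

    protected override void ExecuteLoad()
    {
        NavigationCount++;            // stands in for driver.Navigate().GoToUrl(...)
        _loaded = true;
    }
}
```

The nice property of the pattern is that calling Load() on a page that is already displayed is a cheap no-op, so step definitions can call it defensively.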

Level 4: Wrapper methods
As I said above, I prefer writing wrapper methods around the Selenium API, instead of calling Selenium methods directly in my Page Objects. Why? Because in that way, I can do all error handling and additional logging in a single place, instead of having to think about exceptions and reporting in each individual page object method. In the example above, you could see I created a class OTAElements, which conveniently contains wrapper methods for individual element manipulation. My SendKeys wrapper implementation looks like this, for example:

public static void SendKeys(IWebDriver driver, By by, string valueToType, bool inputValidation = false)
{
    try
    {
        new WebDriverWait(driver, TimeSpan.FromSeconds(Constants.DefaultTimeout)).Until(ExpectedConditions.ElementIsVisible(by));
        driver.FindElement(by).SendKeys(valueToType);
    }
    catch (Exception ex) when (ex is NoSuchElementException || ex is WebDriverTimeoutException)
    {
        ExtentTest test = ScenarioContext.Current.Get<ExtentTest>();
        test.Error("Could not perform SendKeys on element identified by " + by.ToString() + " after " + Constants.DefaultTimeout.ToString() + " seconds", MediaEntityBuilder.CreateScreenCaptureFromPath(ReportingMethods.CreateScreenshot(driver)).Build());
    }
    catch (StaleElementReferenceException) // find the element again and retry
    {
        new WebDriverWait(driver, TimeSpan.FromSeconds(Constants.DefaultTimeout)).Until(ExpectedConditions.ElementIsVisible(by));
        driver.FindElement(by).SendKeys(valueToType);
    }
}
These wrapper methods are a little more complex, but they allow me to keep the rest of the code squeaky clean. And those other parts (the page objects, the step definitions) are where 95% of the maintenance happens once the wrapper method implementation is finished. All exception handling and additional logging (more on that soon) is done inside the wrapper, but once you’ve got that covered, creating additional page objects and step definitions is really, really straightforward.
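Stripped of Selenium and ExtentReports, the wrapper idea boils down to funneling every element action through one method that owns the try/catch and the logging. The toy version below (class and method names are illustrative, not from the actual solution) shows why page object methods can then stay one-liners:

```csharp
using System;
using System.Collections.Generic;

// Toy version of the wrapper idea: one method owns error handling and
// reporting, so callers never repeat try/catch blocks themselves.
public static class ActionWrapper
{
    public static List<string> ReportLines { get; } = new List<string>();

    public static void Execute(string description, Action action)
    {
        try
        {
            action();
            ReportLines.Add("PASS: " + description);
        }
        catch (Exception ex)
        {
            // in the real solution, this is where the ExtentReports entry
            // and the screenshot would be created
            ReportLines.Add("FAIL: " + description + " (" + ex.GetType().Name + ")");
        }
    }
}
```

A page object method then reduces to something like `ActionWrapper.Execute("set username", () => element.SendKeys(username));`, with all failure handling happening in one place.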

I have also created wrapper methods for the various checks performed in my tests, for exactly the same reasons: keeping the code as clean as possible and performing additional logging when desired:

public static void AssertTrue(IWebDriver driver, ExtentTest extentTest, bool assertedValue, string reportingMessage)
{
    try
    {
        Assert.IsTrue(assertedValue);
    }
    catch (AssertionException)
    {
        extentTest.Fail("Failure occurred when executing check '" + reportingMessage + "'", MediaEntityBuilder.CreateScreenCaptureFromPath(ReportingMethods.CreateScreenshot(driver)).Build());
        throw; // rethrow so NUnit still marks the test as failed
    }
}

Even though NUnit produces its own test reports, I fell in love with ExtentReports a while ago, and I’ve been using it in a lot of different projects since. It integrates seamlessly with the solution setup described above, it’s highly configurable and its reports just look good. Especially the ability to include screenshots is a blessing for the user interface-driven tests we’re performing here. I’ve also included (through the LogTraceListener class, which is called in the project’s App.config) the option of adding the current SpecFlow scenario step to the ExtentReports report. This provides a valuable link between test results and test (or rather, application behavior) specification. You can see the report generated by executing the Login feature here.

Extending the solution
In the weeks to come, I’ll probably add some tweaks and improvements to the solution described here. One major thing I’d like to add is including an example of testing a RESTful API. So far, I haven’t found a really elegant solution for C# yet. I’m looking for something similar to what REST Assured does in Java. I’ve tried RestSharp, but that’s a generic HTTP client, not a library written for testing, resulting in a little less readable code if you do want to use it for writing tests. I guess I’ll have to keep looking, and any hints and tips are highly welcome!

I’d also like to add some more complex test scenarios, to show you how this solution evolves when the number of features, scenarios, step definitions and page objects grows, as they tend to do in real-life projects. I’ve used pretty much the same setup in actual projects before, so I know it holds up well, but the proof of the pudding... you know.

Wrapping up
As I said at the beginning of this post, I created this project to show you how I would approach the task of creating a solution for clients that want to use automated user interface-driven tests as part of their testing process. Your opinions on how I implemented things might differ, and that’s OK. I’d love to hear suggestions on how things can be improved and extended. I’d like to stress that this code is not a framework; again, I can’t stand that word. This approach has proven to be a solution to real-world problems for real-world clients in the past, and it continues to do so in projects I’m currently involved in.

I wrote this code to show you (and I admit, my potential clients too) how I would approach creating these tests. I don’t claim it’s the best solution. But it works for me, and it has worked for my clients.

Again, you can find the project on my GitHub page. I hope it’ll prove useful to you.

Improving your craftsmanship through conferences

In an upcoming TechBeacon article that I recently wrapped up, I talk about how to create a team of test automation crafts(wo)men. One of the tips I give managers looking to build such a team is to let their craftsmen attend conferences regularly. For me, attending conferences is one of the most important and influential ways to extend my craftsmanship.

As a delegate
As a conference delegate (visitor), there are several ways to benefit from the experience:

  • Get inspired by the talks and workshops. A good conference provides a mix of keynote talks from established craftsmen and talks and experience reports from less experienced speakers. These are a good way to get some fresh views on your field of work or, in some cases, on life in general. I also like it when a conference offers hands-on workshops: they’re a good way to get to know, or dive deeper into, a tool that might just make your life a lot easier.
  • Interact with fellow craftsmen. Conferences are an excellent opportunity to get to know people in similar roles from other organizations, or even from another country. As with life in general: you never know who you’re going to meet, or what comes out of a seemingly random encounter at a conference. I’ve met people at conferences years ago that I’m still in touch with today. And since the conference attendee list often includes representatives from other organizations, you might even land your next job after an informal first encounter at a conference!
  • See what is available on the tools market. Larger conferences often include a sponsor exhibit, where tool vendors show the latest versions of their offerings. If you’re looking for a solution for a problem you have, most of these vendors are happy to talk to you and give you a demo of what they can do for you.

As a speaker
One step up from being a conference attendee is to start presenting at a conference (or two, or ten) yourself. Even if it might be a bit daunting at first, there’s much to gain from even a single public speaking performance.

  • Building your personal brand. Everybody has a personal brand. I didn’t realize this until fairly recently, but it is a fact. Delivering talks is a great way to show people what you know, what you stand for and what your ideas on your craft are, and in that way build your brand. And when people are looking for someone to work with or for them, a well-crafted personal brand will get you to the top of their wish list.
  • Make sure you understand what you’re doing. An often underrated aspect of presenting is that you have to make sure that you know what you’re talking about. As wise men have said before: ‘you don’t truly understand a subject until you’re able to explain it to your mother’ (or something to that extent). Being able to give a clear, comprehensive and nicely flowing talk on a subject is probably the best proof that you truly know what it is you’re doing.

What I’ve been up to recently
After a fairly slow winter (at least in terms of conferences and presentations), the pace is slowly starting to pick up again. Last week, I delivered my new talk on trust in test automation for the first time, to a crowd of just over a hundred people at TestNet, the Dutch organization for professional testers. For a first time, I think it went pretty well, and I’m looking forward to delivering this talk more often in the months to come. I’ve submitted the same talk to a number of other conferences, and I’m very much looking forward to the response from the respective organizing committees.

It’s also less than two months until my workshop on REST Assured and WireMock at the Romanian Testing Conference. Another event that I’m very much looking forward to! It’ll be my second time speaking abroad (and the first time hosting a workshop abroad), and I’m sure it’ll be a fantastic experience after all the good things I heard from last year’s event. I was also pleasantly surprised to hear that the workshop is already sold out, so it’ll be a full house for me.

Finally, next to my blogging efforts on this site, I’ve been steadily publishing articles for TechBeacon (see my contributor profile here) and I’ve also recently published my second article on StickyMinds (see my user profile here). If you happen to have a few spare minutes and feel like reading my articles, I’d love to hear what you think of them!

More troubles with test data

If managing test data in complex end-to-end automated test scenarios is an art, I’m certainly not an artist (yet). If it is a science, I’m still looking for the solution to the problem. At this moment, I’m not even sure which of the two it really is.

The project
Some time ago, I wrote a blog post on different strategies to manage test data in end-to-end test automation. A couple of months down the road, and we’re still struggling. We are faced with the task of writing automated user interface-driven tests for a complex application. The user interface in itself isn’t that complex, and our tool of choice handles it decently. So far, so good.

As with all test automation projects I work on, I’d like to keep the end goal in mind. For now, running the automated end-to-end tests once every fortnight (at the end of a sprint) is good enough. I know, don’t ask, but the client is satisfied with that at the moment. Still, I’d like to create a test automation solution that can be run on demand. If that’s once every two weeks, all right. It should also be possible to run the test suite ten times per day, though. Shorten the feedback loops and all that.

The test data challenge
The real challenge with this project, as with a number of other projects I’ve worked on in the past, is in ensuring that the test data required to successfully run the tests is present and in the right state at all times. There are a number of complicating factors that we need to deal (or live) with:

  • The data model is fairly complex, with a large number of data entities and relations between them. What makes it really tough is that there is nobody available that completely understands it. I don’t want to mess around assuming that the data model looks a certain way.
  • As of now, there is no on demand back-up restoration procedure. Database back-ups are made daily in the test environment, but restoring them is a manual task at the moment, blocking us from recreating a known test data state whenever we want to.
  • There is no API that makes it easy for us to inject and remove specific data entities. All we have is the user interface, which results in long setup times during test execution, and direct database access, which isn’t of real use since we don’t know the data model details.

Our current solution
Since we haven’t figured out a proper way to manage test data for this project yet, we’re dealing with it the easiest way available: by simply creating the test data we need for a given test at the start of that test. I’ve mentioned the downsides of this approach in my previous post on managing test data (again, here it is), but it’s all we can do for now. We’re still in the early stages of automation, so it’s not something that’s holding us back too much, but all parties involved realize that this is not a sustainable solution for the longer term.

The way forward
What we’re looking at now is an approach that looks roughly like this:

  1. A database backup that contains all test data required is created with every new release.
  2. We are given permission to restore that database backup on demand, a process that takes a couple of minutes and currently is not yet automated.
  3. We are given access to a job that installs the latest data model configuration (this changes often, sometimes multiple times per day) to ensure that everything is up to date.
  4. We recreate the test data database manually before each regression test run.

This looks like the best possible solution at the moment, given the available knowledge and resources. There are still some things I’d like to improve in the long run, though:

  • I’d like database recreation and configuration to be a fully automated process, so it can more easily be integrated into the testing and deployment process.
  • There’s still the part where we need to make sure that the test data set is up to date. As the application evolves, so do our test cases, and somebody needs to make sure that the backup we use for testing contains all the required test data.
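To make the sequence runnable from a CI job rather than by hand, the four steps from the plan above could be expressed as a small ordered pipeline. The sketch below is purely illustrative (the class name and all step bodies are placeholders; the real steps would shell out to the database and configuration tooling):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the manual restore sequence as an ordered pipeline,
// so the same steps run identically whether triggered by hand or by CI.
public class TestDataPipeline
{
    private readonly List<(string Name, Action Run)> _steps = new List<(string, Action)>();

    // keeps track of which steps ran, in which order
    public List<string> ExecutedSteps { get; } = new List<string>();

    public TestDataPipeline AddStep(string name, Action run)
    {
        _steps.Add((name, run));
        return this;
    }

    public void Run()
    {
        foreach (var step in _steps)
        {
            step.Run();                    // placeholder for the real work
            ExecutedSteps.Add(step.Name);
        }
    }
}
```

Usage would mirror the numbered plan: `new TestDataPipeline().AddStep("restore database backup", ...).AddStep("apply latest data model configuration", ...).Run();`. The win isn’t the code itself, it’s that the sequence becomes a single on-demand trigger instead of a manual checklist.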

As you can see, we’re making progress, but it is slow. It makes me realize that managing test data for these complex automation projects is possibly the hardest problem I’ve encountered so far in my career. There’s no one-stop solution for it, either. So much depends on the availability of technical hooks, domain knowledge and resources at the client side.

On the upside, last week I met with a couple of fellow engineers from a testing services and solutions provider, just to pick their brains on this test data issue. They said they have encountered the same problem with their clients as well, and were working on what could be a solution to it. They too realize that it’ll never be a 100% solution to all test data issues for all organizations, but they’re confident that they can provide them (and consultants like myself) with a big step forward. I haven’t heard too many details, but I know they know what they’re talking about, so there might be some light at the end of the tunnel! We’re going to look into a way to collaborate on this solution, which I’m pretty excited about, since I’d love to have something in my tool belt that helps my clients tackle their test data issues. To be continued!