Create your own HTML report from Selenium tests

As I am learning more and more about using Selenium WebDriver efficiently (experiences I try to share through this blog), I'm slowly turning away from my original standpoint that user interface-based test automation is not for me. I am really starting to appreciate the power of Selenium, especially when you use proper test automation framework design patterns such as the Page Object pattern I wrote about earlier. However, out of the box Selenium lacks one aspect I consider vital in a good test automation tool: proper reporting options. Luckily, as Selenium is so open, there are lots of ways to build custom reporting yourself. This post shows one possible approach.

My approach
Personally, I prefer HTML reports, as they are highly customizable, relatively easy to build and easy to distribute to other project team members. To build a nice HTML report, I use the following two-step approach:

  • Execute tests and gather statistics about validations executed
  • Create HTML report from these statistics after test execution has finished

In this post I’ll use the following test script as an example. I created a page with five HTML text fields, for which I am going to validate the default text. Nothing really realistic, but remember it’s only used to illustrate my reporting concept.

public static void main(String args[]) {

	// use the headless HtmlUnitDriver to load the example page
	WebDriver driver = new HtmlUnitDriver();
	driver.get("http://www.ontestautomation.com/files/report_test.html");

	// validate the default text of each of the five text fields
	for (int i = 1; i <= 5; i++) {
		WebElement el = driver.findElement(By.id("textfield" + Integer.toString(i)));
		Assert.assertEquals(el.getAttribute("value"), "Text field " + Integer.toString(i));
	}

	driver.close();
}

When we run this script, one error is generated as text field 4 contains a different default value (go to the URL in the script to see for yourself).

Custom reporting functions
To be able to create a nice HTML report, we first need some custom reporting functions that store test results in a way we can reuse them later to generate our report. To achieve this, I created a report method in a Reporter class that stores validation results in a simple List:

public static void report(String actualValue,String expectedValue) {
	if(actualValue.equals(expectedValue)) {
		Result r = new Result("Pass","Actual value '" + actualValue + "' matches expected value");
		details.add(r);
	} else {
		Result r = new Result("Fail","Actual value '" + actualValue + "' does not match expected value '" + expectedValue + "'");
		details.add(r);
	}
}

The Result object is a simple class with three class variables: a result (which is either Pass or Fail), a resultText (a custom description of the validation) and a screenshot URL (the use of which we will see later).
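The Result class itself is not listed in this post; a minimal sketch of what it could look like, based on the description above (the field and method names are my own assumptions, apart from the constructor and the getters used by the reporting code shown in this post), is:

public class Result {

	private String result;          // either "Pass" or "Fail"
	private String resultText;      // custom description of the validation outcome
	private String screenshotUrl;   // link to a screenshot (used in a later post)

	public Result(String result, String resultText) {
		this.result = result;
		this.resultText = resultText;
	}

	public String getResult() {
		return result;
	}

	public String getResultText() {
		return resultText;
	}

	public String getScreenshotUrl() {
		return screenshotUrl;
	}

	public void setScreenshotUrl(String screenshotUrl) {
		this.screenshotUrl = screenshotUrl;
	}
}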

For every test we execute in our Selenium script, instead of using the standard TestNG / JUnit assertions, we use our own reporting function. You might want to throw an error as well when a validation fails, but I’m happy just to write it to my report and let test execution continue.
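If you do want a failed validation to fail the test as well, a variant of the report method could store the result first and then throw an assertion error. A sketch of such a variant (not part of the original example) might look like this:

public static void reportAndFail(String actualValue, String expectedValue) {
	// store the result first, so it always shows up in the HTML report
	report(actualValue, expectedValue);

	// then fail the test in the usual way
	if (!actualValue.equals(expectedValue)) {
		throw new AssertionError("Actual value '" + actualValue + "' does not match expected value '" + expectedValue + "'");
	}
}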

After test execution is finished, we are going to write the test results we gathered to a file. For this, I used an extremely simple HTML template (yes, I was too lazy even to indent it properly):

<html>
<head>
<title>Test Report</title>
</head>
<body>
<h3>Test results</h3>
<table>
<tr>
<th width="10%">Step</th>
<th width="10%">Result</th>
<th width="80%">Remarks</th>
</tr>
<!-- INSERT_RESULTS -->
</table>
</body>
</html>

In this template, I am going to insert my test results using a simple replace function:

public static void writeResults() {
	try {
		// read the report template into a string
		String reportIn = new String(Files.readAllBytes(Paths.get(templatePath)));

		// insert a table row for every stored result; the placeholder is appended
		// again after each row so the next result can be inserted right after it
		for (int i = 0; i < details.size(); i++) {
			reportIn = reportIn.replaceFirst(resultPlaceholder,
				"<tr><td>" + Integer.toString(i + 1) + "</td><td>" + details.get(i).getResult() +
				"</td><td>" + details.get(i).getResultText() + "</td></tr>" + resultPlaceholder);
		}

		// write the completed report to a date-stamped HTML file
		String currentDate = new SimpleDateFormat("dd-MM-yyyy").format(new Date());
		String reportPath = "Z:\\Documents\\Bas\\blog\\files\\report_" + currentDate + ".html";
		Files.write(Paths.get(reportPath), reportIn.getBytes(), StandardOpenOption.CREATE);

	} catch (Exception e) {
		System.out.println("Error when writing report file:\n" + e.toString());
	}
}
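To give an idea of what this produces: for a passing validation of the first text field, the loop above inserts a table row like the following into the template (derived directly from how the report() method builds its result text):

<tr><td>1</td><td>Pass</td><td>Actual value 'Text field 1' matches expected value</td></tr>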

Finally, we need to use these custom reporting functions in our test script:

public static void main(String args[]) {

	WebDriver driver = new HtmlUnitDriver();

	// clear any previously stored results
	Reporter.initialize();
	driver.get("http://www.ontestautomation.com/files/report_test.html");

	// use our own reporting method instead of a standard assertion
	for (int i = 1; i <= 5; i++) {
		WebElement el = driver.findElement(By.id("textfield" + Integer.toString(i)));
		Reporter.report(el.getAttribute("value"), "Text field " + Integer.toString(i));
	}

	// write all gathered results to the HTML report
	Reporter.writeResults();
	driver.close();
}
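The backing List and the initialize() method of the Reporter class are not shown in this post; a minimal sketch of how they might look (my own assumption, consistent with the methods shown above) is:

public class Reporter {

	// stores all validation results gathered during a test run
	private static List<Result> details = new ArrayList<Result>();

	// clears the results of a previous test run
	public static void initialize() {
		details.clear();
	}

	// report() and writeResults() as shown earlier in this post
}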

The initialize() method simply clears all previous test results by emptying the List we use to store our test results. When we run our test, the following HTML report is generated:

The HTML test report

Here, we can clearly see that our test results are now available in a nicely readable (though not yet very pretty) format. In one of my next posts, I am going to enhance these HTML reporting functions with some additional features, such as:

  • Screenshots in case of errors
  • Use of stylesheets for eye candy
  • Test execution statistics

Hopefully the above will get you started creating nicely readable HTML reports for your Selenium tests!

The Eclipse project used in the above example can be downloaded here. The HTML report template can be downloaded here (right click, save as).

How to compare WSDL versions in your automated test framework

A colleague of mine posed an interesting question last week. He had a test setup with three different machines on which his application under test was installed and deployed. He wanted to make sure in his test that the web service interface offered by these deployments was exactly the same by comparing the WSDLs associated with each installation. However, the tool he used (Parasoft SOAtest) only supports regression testing of single WSDL instances, i.e., it can validate whether a certain WSDL has been changed over time, but it cannot compare two or more different WSDL instances.

Luckily, SOAtest supports extension of its functionality using scripting, and I found a nice Java API that would do exactly what he asked. In this post, I’ll show you how this is done in Java. In SOAtest, I did it with a Jython script that imported and used the appropriate Java classes, but apart from syntax the solution is the same.

The Java API I used can be found here. The piece of code that executes the comparison is very straightforward:

private static void compareWSDL(){
		
	// configure the log4j logger
	BasicConfigurator.configure();
		
	// create a new wsdl parser
	WSDLParser parser = new WSDLParser();

	// parse both wsdl documents
	Definitions wsdl1 = parser.parse("Z:\\Documents\\Bas\\wsdlcompare\\ParaBank.wsdl");
	Definitions wsdl2 = parser.parse("Z:\\Documents\\Bas\\wsdlcompare\\ParaBank2.wsdl");

	// compare the wsdl documents
	WsdlDiffGenerator diffGen = new WsdlDiffGenerator(wsdl1, wsdl2);
	List<Difference> lst = diffGen.compare();
		
	// write the differences to the console
	for (Difference diff : lst) {
		System.out.println(diff.dump());
	}
}

For this example, I used two locally stored copies of a WSDL document where I changed the definition of a single element (I removed the minOccurs="0" attribute). The API uses Log4J as the logging engine, so we need to initialize that in our code and add a log4j.properties file to our project:
Log4J properties file
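For reference, a minimal log4j.properties that simply logs to the console could look like this (a generic example; the exact file used in the original project may differ):

# log everything at INFO level and above to the console
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c{1} - %m%n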
When we run our code, we can see that the WSDL documents are compared successfully, and that the difference I injected by hand is detected nicely by the WSDL compare tool:
Console output for the WSDL comparison
A nice and clean answer to yet another automated testing question, just as it should be.

An example Eclipse project using the pattern described above can be downloaded here.

An introduction to service virtualization

One of the concepts that is rapidly gaining popularity within the testing world, and in IT in general, is service virtualization (SV). This post provides a quick introduction to the SV concept and the way it supports automated and manual testing efforts, and thereby software development in general.

What is service virtualization?
SV is the concept of simulating the behaviour of an application or resource that is required to perform certain types of tests, in cases where that resource is not readily available or where its availability or use is too expensive. In other words, SV aims to remove traditional dependencies in software development when it comes to the availability of systems and environments. In this way, SV is complementary to other forms of virtualization, such as hardware virtualization (VPS) or operating system virtualization (VMware and similar solutions).

Behaviour simulation is carried out using virtual assets, which are pieces of software that mimic application behaviour in some way. Although SV started out with the simulation of web service behaviour, modern SV tools can simulate all kinds of communication that is performed over common message protocols. In this way, SV can be used to simulate database transactions, mainframe interaction, etc.


From a ‘live’ test environment to a virtual test environment

What are the benefits of using service virtualization?
As mentioned in the first paragraph of this post, SV can significantly speed up the development process when required resources:

  • are not available during (part of) the test cycle, thereby delaying tests or negatively influencing test coverage;
  • are too expensive to keep alive (e.g., test environments need to be maintained or rented continuously even though access is required only a couple of times per year);
  • cannot readily emulate the behaviour required for certain types of test cases;
  • are shared throughout different development teams, negatively influencing resource availability.

Service virtualization tools
Currently, four commercial service virtualization tools are available on the market.

Furthermore, several open source service virtualization projects have emerged in recent years, such as WireMock and Betamax. These obviously offer fewer features, but they might fit your project requirements nonetheless, making them worth an evaluation.
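To give an impression of what simulating a dependency looks like in practice, here is a small sketch using WireMock's Java API (a generic example of my own; the port, URL and response data are made up and are not taken from any of the projects mentioned in this post):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class OrderServiceStub {

	public static void main(String[] args) {
		// start a stub HTTP server on a local port
		WireMockServer server = new WireMockServer(8089);
		server.start();

		// simulate the behaviour of a 'real' order provisioning service
		server.stubFor(get(urlEqualTo("/orders/12345"))
			.willReturn(aResponse()
				.withStatus(200)
				.withHeader("Content-Type", "application/json")
				.withBody("{\"orderId\": 12345, \"status\": \"PROVISIONED\"}")));

		// tests can now point to http://localhost:8089 instead of the real dependency,
		// and server.stop() can be called once the test run is finished
	}
}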

Personally, I have extensive experience using Parasoft Virtualize and have been able to successfully implement it in a number of projects for our customers. Results have been excellent, as can be seen in the following case study.

A case study
The central order management application at one of our customers relied heavily on an external resource for the successful provisioning of orders. This external resource required manual configuration for each order that was created, which meant the test team had to file a request for configuration changes for every test case. The delay for this configuration could be as much as a week, resulting in severe delays in testing and limiting the achievable test coverage (as testers could only create a small number of orders per test cycle).

Using service virtualization to simulate the behaviour of this external resource, this dependency has been removed altogether. The virtual asset implementing the external resource behaviour processes new orders in a matter of minutes, as opposed to weeks when the 'real' external resource is involved. Using SV, testers can provision orders much faster and are able to achieve much higher test coverage as a result. This has led to a significant increase in software quality. SV has also been a key factor in the switch to an Agile development process; without SV, the short development and testing cycles associated with the Agile software development method would not have been possible.

Service virtualization on Wikipedia