Step-by-step integration testing: a case study

For the last 9 months or so, I have been working as a tester on a project where we develop and deliver the supply chain suite connected to a brand-new, highly automated warehouse for a large Dutch retailer. As with so many modern software development projects, we have to deal with a lot of different applications and the information exchanged between them. For example, ordering a single article on the website and then processing it all the way to your doorstep involves more than ten applications and several times that number of XML messages.

Testing whether all these applications communicate correctly with one another is not simply a matter of placing an order and seeing what happens. It requires a structured, bottom-up approach: start at the smallest level of integration and move up until the complete set of applications involved is exercised. In this post, I will sketch how we have done this using three levels of integration testing.

First, here’s a highly simplified overview of what the application landscape looks like. On one side we have the supply chain suite and all other applications containing and managing necessary information. On the other side there’s the warehouse itself. I have been mostly involved in testing the former, but now that we’re into the final stages of the project, I am also involved (to an extent) in integration testing between both sides.

The application landscape

Level 1: message-level integration testing
The first level of integration testing that we perform is on the message level. On this level, we check whether message type XYZ can be sent successfully from application A to application B. Virtually all application integration is done using the Microsoft BizTalk platform. To create and perform these tests, we use BizUnit, a test tool specifically designed for testing BizTalk integrations. Every test follows the same procedure:

  1. Prepare the environment by cleaning the relevant input and output queues and file locations.
  2. Place the message type to be tested on the relevant BizTalk receive location (a queue or RESTful web service).
  3. Validate that the message has been processed successfully by BizTalk and placed on the correct send location.
  4. Check whether the messages have been archived properly for auditing purposes.
  5. Rinse and repeat for other message flows.

Note that on this test level, no checks are performed on the contents of messages. The only checks that are performed concern message processing and routing. BizTalk neither inspects nor cares about message contents; it only processes and routes messages based on XML header information. Therefore, it does not make sense to perform message content validations here.
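BizUnit itself drives these steps from XML test definitions, but the pattern is tool-agnostic. Below is a minimal Python sketch of the same five steps against a hypothetical in-memory router; the `Router` class and all names in it are invented stand-ins for BizTalk receive and send locations, not real BizUnit or BizTalk APIs.

```python
# Sketch of a message-level integration test, mirroring the five steps
# above. Everything here is a hypothetical stand-in for BizTalk ports.

class Router:
    """Routes messages to a send queue based on header info only;
    message contents are never inspected, matching BizTalk's behaviour
    as described above."""

    def __init__(self, routing_table):
        self.routing_table = routing_table  # message type -> send queue name
        self.queues = {name: [] for name in routing_table.values()}
        self.archive = []

    def clean(self):
        # Step 1: prepare the environment by emptying queues and archive
        for queue in self.queues.values():
            queue.clear()
        self.archive.clear()

    def receive(self, message):
        # Steps 2/3: route the message based on its header, not its body
        destination = self.routing_table[message["header"]["type"]]
        self.queues[destination].append(message)
        # Step 4: keep a copy for auditing
        self.archive.append(message)


def test_customer_order_routing():
    router = Router({"CustomerOrder": "warehouse_in"})
    router.clean()                                   # step 1
    message = {"header": {"type": "CustomerOrder"}, "body": "<Order/>"}
    router.receive(message)                          # step 2
    assert message in router.queues["warehouse_in"]  # step 3: routed correctly
    assert message in router.archive                 # step 4: archived
    # step 5: rinse and repeat for other message flows


test_customer_order_routing()
```

Note that the assertions only check routing and archiving, never the `body`, just as on the real test level.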

Scope of the BizUnit message level tests

Level 2: business process-level integration
The second level of integration testing focuses on the successful completion of different business processes, such as customer orders, customer returns, and purchasing. These business processes involve the exchange of information between multiple applications. As the warehouse management system is developed in parallel, that interface is simulated using a custom-built simulation tool. As a side note: this is a form of service virtualization.

Tests on this level involve triggering the relevant business process and tracking the process instance as related messages pass through the applications involved. For example, a test of the customer order process starts by creating a new order and then verifying, among other things, that the order:

  • can be successfully picked and shipped by the warehouse simulator,
  • is created, updated, and closed correctly by the supply chain suite,
  • is administered correctly in the order manager,
  • triggers the correct stock movements, and
  • successfully triggers the invoicing process.
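To make the shape of such a process-level check concrete, here is a hedged Python sketch. The `WarehouseSimulator` stub stands in for the service-virtualized warehouse interface mentioned above; the `SupplyChainSuite` class, its order lifecycle, and all field names are invented for illustration (the real tests ran against the actual applications, partly via FitNesse).

```python
# Sketch of a business-process-level test: create an order, let a
# simulated warehouse pick and ship it, then verify the downstream
# effects. All classes and names here are hypothetical stand-ins.

class WarehouseSimulator:
    """Service-virtualization stub for the warehouse management system,
    which was still being developed in parallel."""

    def pick_and_ship(self, order):
        order["status"] = "shipped"


class SupplyChainSuite:
    def __init__(self, warehouse):
        self.warehouse = warehouse
        self.orders = []
        self.stock = {"article-123": 10}
        self.invoices = []

    def create_order(self, article, quantity):
        order = {"article": article, "quantity": quantity, "status": "created"}
        self.orders.append(order)
        return order

    def process_order(self, order):
        self.warehouse.pick_and_ship(order)               # picked and shipped by the simulator
        self.stock[order["article"]] -= order["quantity"]  # stock movement
        self.invoices.append(order)                        # invoicing triggered
        order["status"] = "closed"                         # order closed by the suite


def test_customer_order_process():
    suite = SupplyChainSuite(WarehouseSimulator())
    order = suite.create_order("article-123", 2)
    suite.process_order(order)
    assert order["status"] == "closed"      # created, updated and closed correctly
    assert suite.stock["article-123"] == 8  # correct stock movement
    assert order in suite.invoices          # invoicing process triggered


test_customer_order_process()
```

The point of the sketch is the structure of the assertions: one business-process trigger, followed by checks on each application the process instance touches.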

Scope of the business process tests

A number of tests on this level have been automated using FitNesse. For me, this was the first project where I had to use FitNesse, and while it does the job, I haven’t exactly fallen in love with it yet. Maybe I just don’t know enough about how to use it properly?

Level 3: integration with warehouse
The third and final level of integration testing was done on the interface between the supply chain suite (plus connecting applications) and the actual warehouse itself. As both systems were developed in parallel, it took quite a bit of time before we were finally able to test this interface properly. And even though our warehouse simulation had been designed and implemented very carefully, and it certainly did a lot to speed up the development process, the first integration tests showed that there is no substitute for the real thing. After lots of bug fixing and retesting, we were able to successfully complete this final level of integration testing.

Scope of the warehouse integration tests

For this final level of integration testing, we were not able to use automated tests, because of the time the warehouse needs to physically pick and ship the created orders. It would not have made sense to build automated tests that wait an hour or more for the warehouse to report that an order has been shipped. The test cases executed mostly follow the same steps as those in level 2, as they are also focused on executing business processes.
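To illustrate why: an automated check at this level would essentially be a polling loop waiting on physical hardware, as in the hypothetical sketch below (all names and the `get_status` callback are invented). With a real warehouse in the loop, this ties up a test runner for the full pick-and-ship duration.

```python
import time

# Hypothetical sketch of what an automated level-3 check would amount
# to: polling the warehouse until the physical pick and ship completes.
# With real hardware this loop can run for an hour or more, which is
# why we executed this level manually instead.

def wait_until_shipped(get_status, order_id, timeout_s=3600, poll_s=60):
    """Poll get_status(order_id) until it reports 'shipped' or the
    timeout expires. Returns True if shipped within the timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status(order_id) == "shipped":
            return True
        time.sleep(poll_s)
    return False
```

Against a simulator this pattern is cheap; against the physical warehouse, the default one-hour timeout is the realistic case rather than the worst case.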

I hope this post has given you some ideas on how to break down the task of integration testing for a large and reasonably complex application landscape. Thinking in different integration levels has certainly helped me to determine which steps and which checks to include in a given test scenario.

Automated tests do not improve your testing process. Testers do.

‘Is manual testing going to go the way of the dinosaur?’

‘Are manual testers becoming obsolete?’

These are just a few ways of phrasing concerns that have been raised a lot in the testing community over the past few months, possibly even years. With software delivery and test cycles becoming ever shorter, organizations depend more and more on automated testing to deliver software with at least an acceptable level of quality.

Testers who are not involved in, or even interested in, automated testing ('manual testers', for lack of a better term) might start to worry whether their job is still secure. I would like to argue that it is.

The first and foremost reason is that automated tests do not improve the quality of a product, nor do they improve the quality of your testing process. Automated test tools are very useful for exactly one thing: quick and/or unattended execution of a number of predefined checks.

If those automated checks are of poor quality, you’ll get poor quality results. You’ll just get them faster. So while automated tests may speed up the testing process, they’ll never improve it.

However, testers do.

If your automated tests only check irrelevant things, your test tool won’t tell you.

Testers do.

Your automated test tool does not automatically generate tests for brand-new software components (at least, most tools don't).

Testers do.

Your automated tests do not discuss features with developers and business representatives to see whether the specifications are complete and unambiguous.

Testers do.

Your automated tests do not think ‘hey, what would happen if I do XYZ instead of ABC?’

Testers do.

I could go on for a while, but in short, there is still a lot of work to do for testers.

You won't get away much longer with following the 'old world' process of waiting for specifications, writing test plans, writing test cases, executing those test cases, reporting bugs, and writing test reports. But you don't have to fear that there is no place for you in the software development process.

And of course, you could always venture into test automation.

My article on service virtualization has been published on StickyMinds

On August 10th, StickyMinds, a popular online community for software development professionals, published an article I wrote titled ‘Four ways to boost your testing process with service virtualization’. In this article, I describe a case study from a project I worked on recently, where service virtualization was used to significantly improve the testing process.

The article demonstrates (as you can probably guess from the title) how you too can employ service virtualization to remove some of the bottlenecks associated with testing and test data and test environment management for distributed systems and development processes.

You can read the article by visiting the StickyMinds homepage now. The article will be right there on the front page.


Please let me know what you think in the comments! Also, feel free to share the article with your connections on social media.

Again, the article can be read here.