On my new 'Valuable feedback, fast' course

This post was published on January 7, 2026

A couple of weeks ago, I published the training page for a brand-new course that I’m looking to run at least a couple of times in 2026. The course is called ‘Valuable feedback, fast’, and in this blog post, I’d like to share a little more about why I created this course, why I designed it the way I did, and what it will look like.

Why did I create this course in the first place?

I’ve been running workshops and training courses for close to a decade now, and I really, really enjoy it. There’s just something very rewarding in sharing your knowledge and experience and helping others learn something new. Until now, the most important part of most of the courses I have offered has been the ‘hands-on’ part: learning how to use a specific tool, or how to start implementing a new technique or approach.

Next to the how, I always try to teach participants in my courses what to do and what not to do with that tool or technique, and why they should or should not use it in a specific context or situation. In other words: I try to go beyond ‘teaching people tricks with tools’, and I think I’m doing a pretty good job in that regard. You should really ask the participants themselves whether they agree, though.

I plan to keep offering this type of workshop or training course in the future. It’s useful, fun, there’s demand for it, and it pays the bills, so I don’t see a reason to stop. However, I think there’s a need for a different kind of course in the area of test automation, too.

A course that does not focus on a specific tool, technique or practice, but one that focuses on ‘the bigger picture’ of test automation. One that teaches teams and organizations how to be successful with test automation. How test automation fits into their organization, their tech stack and their way of working. What common pitfalls one encounters on the road to test automation success, and how to navigate those pitfalls. In other words, a course that teaches them how to reach the test automation goal of providing valuable feedback, fast.

I’ve been doing some market research, and I could not find many courses out there that cover all of this, even though a few cover parts of it. I think this is a real gap in the current offering, because if we want to be successful with test automation, we need to be able to talk about it and work on it with that bigger picture clearly defined for our specific context.

I’m saying this because I’ve seen it happen so often in the nearly 20 years I’ve been working in test automation. When teams struggle with their test automation, they look at the tool or the technique they’re using and come to the conclusion that those are the root cause of their struggles. Often, though, the tool or the technique is only a (small) part of the problem: the real problem is lack of a solid, holistic test automation strategy that is created for their specific context and purpose.

This is exactly what the ‘Valuable feedback, fast’ course is all about.

What’s the difference between this course and other courses ‘out there’?

Apart from the fact that the focus of this course is not on a single tool or technique, but on creating a holistic test automation strategy, there is another difference that I think makes this course stand out from other courses out there.

That difference is the context I present in the course, and within which the exercises are set. Most courses I run, and most courses I have attended, have squeaky clean contexts. The exercises are typically relatively small and simple (that doesn’t mean they’re easy, by the way), with the goal to achieve a single, very well-defined objective. Write a test that does this. Make this test code pass. Refactor that method.

Real life, though, is often anything but small and simple. There’s often a large, messy context in which you’re trying to do your work in the best possible way, and you have to navigate different kinds of impediments, work with existing applications and code bases that definitely do not look like what you’ve seen in the books, deal with conflicts of interest, work with different kinds of stakeholders, and so on.

In the ‘Valuable feedback, fast’ course, the context is messy, and that is by design. While it is impossible to cover everything that can happen in real life, the course attempts to recreate several challenges I have seen and have had to deal with in the past, and that I hear others talk about and struggle with, too.

The objective is to present a context that looks more like what you’ll have to deal with back at work after the course. Doing so addresses one of the most important pain points I hear from participants in my workshops: they struggle to apply what they have learned in their own context, because that context is a lot messier and more complex than what they have seen during their time in the classroom.

What does an exercise in the course look like?

To illustrate what I mean when I talk about a messy context, let’s look at an example of what a typical exercise in the course looks like. One of the topics we will discuss in the course is that of ‘automating regression tests’, which is still considered a ‘holy grail’ in many organizations. I wrote down my thoughts on regression test automation a while ago, but it is a topic that keeps coming back, and that’s understandable. To some extent, at least.

Before I talk about the specific exercise, it’s good to know that the entire ‘Valuable feedback, fast’ course revolves around a single context, representing an online bank struggling with delivering valuable software to their customers at the pace that these customers require. One of the reasons that their delivery process is lagging is the fact that the teams spend a lot of time on regression testing, i.e., on verifying that core software functionality still works, for every new feature, improvement and bug fix.

The exercise I’ll present in the course is to formulate an approach for speeding up this process, given a combination of the following conditions and impediments:

  • Team leads strongly advocate for ‘automating all the regression testing’, motivated by demands from higher up to speed up the delivery process and shorten feedback loops
  • Developers are under near-constant pressure to deliver features, which means they claim they can’t spend a lot of time writing test automation code
  • Not every tester has the skills to contribute to writing test code that is easy to read, understand and maintain
  • The backend system involved is relatively old and wasn’t exactly written with testability and using modern test automation tools in mind

As you can see, participants will have their work cut out for them in coming up with a feasible and acceptable approach. In a follow-up to this exercise, they will run a simulated experiment in which they actually implement (part of) their proposal and determine whether they made the right choice, or whether they need to revise and adjust their strategy.

There’s still quite a bit of work to do in designing this exercise, and the other exercises in the course, but this should give you a good idea of what I have in mind: figure out a strategy to address a specific problem, implement it at a small scale, observe the outcome, report on it and learn from it.

Who’s the intended audience for this course?

As mentioned on the training page:

This course is for software development and testing practitioners, as well as tech and team leads, who want to learn how to be successful with test automation and how to achieve its goal of ‘valuable feedback, fast’.

The ideal client for this course is a development team, or a small group of development teams (up to 20 people in total) from the same company, who are looking to dive deeper and get serious about their test automation strategy and learn how to deal with the real-life problems that tend to come up in the process. I’m looking for teams who want to move beyond learning a specific tool or technique (even when this course does include plenty of hands-on, technical work) and dive deeper into what decides whether test automation efforts succeed or fail.

That sounds pretty good!

Excellent! If you would like more information about the course, or if you want to have a conversation to see if we can bring this course to your company, let’s have a chat.

This blog post was written while listening to Lemon8 – The Inner Sanctuary Sessions CD2
