Less, but better
This post was published on December 19, 2025.

There’s an expression referenced in Greg McKeown’s excellent book ‘Essentialism: The Disciplined Pursuit of Less’ that I’ve found myself thinking about on a regular basis for the last couple of months, and that’s
“Weniger, aber besser”
This German expression translates to ‘less, but better’ in English, and it describes the approach behind the designs of German industrial designer Dieter Rams. Now, I’m not an industrial designer, or a designer of any kind, but I think it’s an expression we might do well to keep in mind a little more often when we talk about test automation and the approach I see teams adopting towards it, too.
Why? Well, two reasons, really:
- There is a lot of talk about coverage, and about teams writing as many tests as it takes to meet a certain coverage number. The type of coverage differs, ranging from statement coverage (to me the most simplistic form) to more useful flavours such as mutation coverage. See the sketch after this list for an example of the difference.
- There is even more talk (a lot more these days!) about how we can use AI-powered tools to drastically increase the speed with which we write our tests.
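To make that difference concrete, here is a minimal, hypothetical sketch (the `apply_discount` function and its test are made up for illustration, not taken from any real project): the test achieves 100% statement coverage, yet a mutation testing tool would report a surviving mutant, because the boundary condition is never checked.

```python
def apply_discount(order_total: float) -> float:
    """Apply a 10% discount to orders of 100 or more (hypothetical example)."""
    if order_total >= 100:
        return order_total * 0.9
    return order_total


def test_apply_discount():
    # Both branches are executed, so a statement coverage report shows 100%...
    assert apply_discount(200) == 180
    assert apply_discount(50) == 50
    # ...but the boundary value of exactly 100 is never checked, so a mutant
    # that changes '>=' into '>' would pass this test undetected. Mutation
    # coverage exposes that gap; statement coverage does not.
```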
To put it differently, I see a lot of talk about doing more and doing things faster. What I’m missing in the conversation is how we can do things better. For me personally, talking about how we can do things better is infinitely more interesting than talking about how to do things faster or how to do more things in the same time. Only when we try and genuinely do things better than we did before do we make progress, after all…
Yes, I can hear you think, sometimes faster is better. And you’re not wrong there. I frequently mention ‘valuable feedback, fast’ when I talk about the purpose of test automation, for example. Getting the right feedback faster, for example by automating repetitive tasks or by speeding up inefficient tests, is progress. It does make things a bit better, overall.
What often hasn’t changed, though, when we change the speed with which we get our feedback, is the quality of that feedback. We didn’t learn anything new just by retrieving the same information in a faster, more efficient way. We didn’t necessarily improve our product by getting feedback faster, even when we might have improved the process.
That, to me, feels like we’re leaving a lot of value on the table. Our product, be it the thing we put into the hands of our end user or the tests that we write to learn about the state of that product before we put it into the hands of our end user, doesn’t necessarily improve when we write and run our tests faster.
The same goes for ‘more’. ‘More’ does not automatically equal ‘better’ - it simply equals ‘more’. More feedback. More tests. More code. Like doing things faster, having more tests by itself does not automatically make for a better product.
Sometimes, having ‘more’ of something even has a negative effect on the product overall. Those low-value tests that were really only written to cover a part of your code that doesn’t really implement any behaviour, but that needed to be written to hit a certain coverage percentage… Do those really add value, or are they more like dead weight? Are they really worth the time it takes to execute them, to review their results and to maintain them to keep them in shape?
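As a hypothetical illustration of the kind of test I mean (the `Customer` class and its test are invented for this post), consider a test that exercises a plain data holder with no behaviour of its own, written purely to nudge the coverage number upward:

```python
from dataclasses import dataclass


@dataclass
class Customer:
    """A plain data holder with no behaviour of its own (hypothetical example)."""
    name: str
    email: str


def test_customer_stores_its_fields():
    # This test merely restates what the dataclass decorator already guarantees;
    # it adds execution and maintenance cost, but tells us nothing new.
    customer = Customer(name="Alice", email="alice@example.com")
    assert customer.name == "Alice"
    assert customer.email == "alice@example.com"
```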
And that basically sums up what is bothering me about AI at the moment. Or rather, about the way we talk about AI, at least from what I read on LinkedIn and in the blogosphere, as well as what I hear in conference talks. All the talk about AI and its impact on test automation, software development and software testing seems to be heavily skewed towards ‘faster’ and ‘more’, to the detriment of talking about ‘better’.
Now, before you take this as a swipe at AI itself, or think I’m a Luddite rejecting new technology: it isn’t, and I don’t think I am (well, not in this case anyway). AI itself isn’t the reason I’m having these thoughts, or why I’m writing this blog post. It’s how we use AI, and how we talk about it, that makes me wonder whether we’re forgetting something amid all the hype around what AI can do.
There’s already a lot of research being done on the negative side effects of AI (over-)use, with results that quite frankly worry me. I can’t help but wonder: is being blinded by the shine of ‘faster, faster, faster’ and ‘more, more, more’ and forgetting to think about ‘better’ another one of those side effects? I don’t have an answer to this, at least not right now, but it is something that has been occupying my thoughts for a while now.
So, in an attempt to try and bring a bit of balance to a world that seems to be obsessed with ‘faster’ and ‘more’, for 2026 I promise (to myself if to nobody else) that I’ll keep thinking about, talking about and asking questions about how we can make things genuinely better, too.
That probably means you won’t see me writing a lot about how to use AI to do task X faster, or how to generate more artifacts of type Y. I’ve never done that in the past, and I don’t think it is all that interesting a subject, either. Or at least, the novelty of it wore off for me a while ago.
What I am interested in, what I have always been interested in and what I will probably remain interested in for a long time to come, is how we can use tools (and yes, that includes the AI-powered ones) to increase the quality of the work that we do and the products that we deliver. So, you’ll see me writing about that in 2026. Hopefully more often than I did in 2025. I’m happy to be an exception to the title of this blog post, and do better by writing more about doing better, if you catch my drift…
Note: this blog post was written while listening to Bill Hamel - Balance 003 CD 1
"