Workshop: Using AI responsibly in test automation

As part of my day-to-day work, I talk to and work with teams that are actively experimenting with AI to support their test automation efforts. Increasingly often, these teams use LLMs such as Claude to help write or refactor test code for tools like Playwright or REST Assured.

That interest is understandable. When used well, AI can be a powerful assistant. However, I also see teams fall into the same two traps when using AI to help write their tests:

  • They focus primarily on writing test code faster, rather than writing better tests
  • They outsource decisions to AI that require human context, judgment, or responsibility

In this workshop, we will address both these traps and explore how to use AI as a deliberate and effective assistant in test automation, rather than as an uncontrolled replacement for human effort.

Yes, I’d like to book this workshop for my team!

What will you learn?

In this workshop you will learn, among other things:

  • How and where AI can effectively support test automation, and where it should not be used
  • How to define clear guardrails and constraints when creating and working with AI-generated test code
  • Why and how to critically review and validate AI-generated tests, instead of accepting them at face value
  • Common pitfalls when using AI for test automation, including false confidence, overfitting to examples, and loss of intent
  • How to use AI to improve test design, readability and maintainability, not just typing speed
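To make the "false confidence" pitfall from the list above concrete, here is a minimal TypeScript sketch. The `getDiscount` function and its business rule are hypothetical, invented purely for illustration; the point is the shape of the check, not the code itself. An AI assistant will sometimes "derive" the expected value using the same logic it is supposed to verify, producing a test that can never fail:

```typescript
// Hypothetical code under test (invented for this sketch): orders of
// 100 or more receive a flat discount of 10.
function getDiscount(orderTotal: number): number {
  return orderTotal >= 100 ? 10 : 0;
}

// The kind of check an assistant can generate: the "expected" value is
// re-derived with the same rule the test is meant to verify, so the
// assertion passes even if the business rule itself is wrong.
const total = 120;
const expected = total >= 100 ? 10 : 0; // duplicated logic, not an independent oracle
console.assert(getDiscount(total) === expected, "discount mismatch");

// A meaningful test states its expectations independently of the
// implementation, so an incorrect rule would actually fail:
console.assert(getDiscount(120) === 10, "flat discount of 10 at or above 100");
console.assert(getDiscount(99) === 0, "no discount below 100");
```

Both versions pass here, but only the second one would catch a regression in the discount rule; spotting that difference in review is exactly the kind of skill this workshop trains.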

Why do we need these skills in a time when AI can just write our tests for us? That’s a great question.

First, accountability for testing and automation, and for the information these tests provide, remains with the team, that is, with you. That means you need the skills to assess whether AI-generated tests are meaningful and reliable.

Second, effective test automation requires understanding product behavior, risks, and trade-offs. These are contextual and often implicit concerns that cannot simply be delegated to an AI model.

In other words, AI is an incredible assistant in testing and automation, but not a direct replacement for humans.

This workshop uses Claude Code as the primary AI assistant, and typically uses Playwright as the example test automation tool. However, the concepts and exercises are tool-agnostic, and the workshop can also be run using other tools, such as Selenium, Cypress, or REST Assured.

Who should take this workshop?

This workshop is intended for testers and developers who are already working with test automation and are experimenting with, or planning to use, AI to support that work.

Some prior experience with test automation is required. Familiarity with Playwright, REST Assured, or another code-based test automation tool is beneficial, but not mandatory.

This workshop works well as a standalone session, or as a complement to existing workshops on test automation tools and practices.

Workshop duration and delivery

This is a one-day workshop (6–8 hours). It can be delivered as an on-site or online in-company workshop, or as a full-day conference tutorial.

I’m interested, what’s next?

I’m happy to hear that! Click the button below, complete the contact form, and I’ll get back to you as soon as possible.

Yes, I’d like to book this workshop for my team!

If you’d like to see the other training courses and workshops I have on offer, please click here.