You know it's going to be a fun interview when, in a conversation about transcription, your interviewee admits he uses AI to render his meeting notes in archaic English.
Dave Colwell is VP of AI & ML at Tricentis. He also admits that some people he talks to at meetings “are very boring people. They always want an interview, and I’m like: ‘What are you going to talk about? Same thing as the last call.’”
"So once, before anyone else joined, I set the transcript to be written in Shakespearean English. Everyone’s lines came out like “thou so-and-so.” It was hilarious. Prompt injection at its finest.”
He also recommends doing the same in gangster rap.
But speaking of AI, Tricentis provides a continuous testing platform designed to help large organisations automate and accelerate software testing as part of their DevOps and CI/CD pipelines. Its main goal is to help companies deliver software faster, with fewer defects, by replacing slow, manual testing with AI-powered, model-based, and low-code automation tools.
From testing software to testing AI itself
This week Tricentis unveiled its vision for the future of AI-powered quality engineering at Tricentis Transform, its flagship global event in London, marking a defining moment in how enterprises will build, test and deliver software in the AI era.
This announcement introduces a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralised hub for managing quality at the speed and scale of modern innovation.
Testing at the speed (and chaos) of AI
As software creation accelerates through generative AI, organisations face an exponential rise in both code volume and complexity. Traditional testing models can no longer keep pace. Tricentis’ vision reframes quality engineering as a strategic discipline powered by intelligent, autonomous systems where agents work alongside skilled professionals to ensure every release is faster, safer and more reliable.
According to Dave Colwell, VP of AI & ML at Tricentis, the company began with automated testing, performance testing, and test management — essentially making testing easier to create and maintain.
“Today, we work with companies that run on incredibly complex technology stacks. Some of their systems were built in the 1970s, others were rolled out yesterday.
If any piece of that stack fails, customers feel the pain immediately. That’s where Tricentis comes in: we give enterprises the ability to test everything, across old and new technologies alike.”
Colwell likes to joke that “I’m an ‘AI hipster’ — I joined before large language models were even on the scene.”
His background is in computer vision and natural language processing, and he recounts that early on, Tricentis used computer vision models to analyse user interfaces and figure out how to test them organically, rather than mechanically.
“That was our first foray into applying AI to testing.
Over the past eight years, we’ve invested heavily in AI, with the main goal of reducing the human effort needed to build and maintain tests. Ideally, the tests build and maintain themselves. That’s the future we’re working toward.”
The AI testing paradox: when ‘wrong’ isn’t a bug
I was curious: what makes testing AI solutions so difficult?
According to the 2025 Tricentis Quality Transformation Report, nearly two-thirds (63 per cent) of organisations deploy code without fully testing it, and over 8 in 10 (81 per cent) report financial impacts from software defects exceeding $500k annually.
As AI accelerates development and delivery, the need for adaptive, autonomous testing becomes critical.
Tricentis’ agentic AI technologies address this challenge directly, enabling systems that not only generate and execute tests but learn continuously from outcomes to enhance reliability and reduce risk over time.
Colwell explained that the biggest challenge in AI is that a “wrong” response isn’t necessarily a bug — it’s just another data point. With traditional software, once a bug is fixed correctly, it stays fixed. With AI, you can’t guarantee that:
“Take a customer-support chatbot as an example,” shared Colwell.
“During testing, someone might ask a question and the bot answers incorrectly. In a traditional workflow, testers would raise a bug. Suddenly you have thousands of ‘bugs’ — but they’re not fixable in the usual sense, because AI is probabilistic. It’s a big ball of math making guesses.”
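Colwell’s point — that a wrong answer is a data point, not a bug — suggests evaluating probabilistic systems statistically. Here is a minimal sketch of that idea (all names are hypothetical illustrations, not Tricentis code): instead of filing one bug per wrong answer, you score many trials and gate the release on an aggregate pass rate.

```python
import random

def evaluate(bot, cases, min_pass_rate=0.95):
    """Score a probabilistic system by overall pass rate instead of
    treating each wrong answer as an individual, fixable bug."""
    passed = sum(1 for question, expected in cases if bot(question) == expected)
    rate = passed / len(cases)
    return rate, rate >= min_pass_rate

# A toy stand-in for a chatbot that answers correctly ~90% of the time.
def flaky_bot(question):
    return "refund within 30 days" if random.random() < 0.9 else "not sure"

cases = [("What is the refund policy?", "refund within 30 days")] * 1000
rate, ok = evaluate(flaky_bot, cases)
print(f"pass rate {rate:.1%}, meets 95% threshold: {ok}")
```

The design choice here mirrors the interview: the unit of quality is the distribution of outcomes, not any single response.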
When not to use AI
The first filter Tricentis teaches customers is: should you use AI at all? According to Colwell, if your use case can’t tolerate persistent error, then AI isn’t the right tool.
“In drug discovery or hiring, even a single mistake can have catastrophic consequences. By contrast, AI-generated code is a safer use case because humans review it, pipelines catch errors, and the system is designed with verification in mind.”
AI turns startups into giants
Tricentis’ customer base is primarily large enterprises with sprawling, complicated tech stacks and the most to lose when things go wrong.
They’re also the ones most anxious about being disrupted; Colwell argues that many large enterprises are genuinely intimidated by nimble, three-person AI startups.
“AI makes it possible for a tiny team to look like a big company almost overnight. We’ve seen billion-dollar organisations losing customers to these newcomers because the speed and polish of what they deliver is suddenly competitive.”
The problem for enterprises is that they’re weighed down by technical debt and legacy systems. They’ve never seen customers churn so quickly to younger competitors. Even though most AI startups won’t survive long-term, the disruption they cause is real.
“Vibe coding”: good idea, terrible name
And, of course, I wanted to get Colwell’s take on vibe coding. He laughs that while it’s a terrible name, the concept is real:
“We’ve run AI coding programs internally, and we’ve seen two very different outcomes. One engineer handed almost everything to the AI. He looked highly productive — shipping massive amounts of code.
But when we reviewed it, much of the code was low-quality, because he had surrendered too much control.
On the other hand, teams that built processes around AI coding — with feedback loops, documentation, and review — saw far better results.
They made the AI explain its reasoning, documented acceptance criteria, and looped back to check whether the outputs matched the original plan. That produced reliable outcomes.”
So the lesson is this: AI coding is about changing how you work — focusing on process, documentation, and validation. Colwell asserts:
“Done this way, it’s powerful, but done badly, it’s a disaster.”
The “stolen generation” of developers
I’m always interested in what AI means for young developers entering the workforce, especially in parts of Europe where unemployment in early-career roles is high.
According to Colwell, right now, we have a delicate balance.
"Younger engineers adapt quickly to new tools, but they don’t always recognize what “good” code looks like. Experienced engineers know quality, but can struggle to adapt to the new paradigm. Pairing the two creates strong outcomes.”
He believes that looking further ahead, we’ll flip the traditional learning path:
“Today, junior developers spend years learning by fixing bugs and writing small features before they’re trusted with design.
In the future, people may learn design patterns and architecture first, because AI will handle much of the low-level coding.”
However, Colwell also raised concern about what he calls a “stolen generation” — developers trained on coding skills that AI makes less relevant, but who haven’t learned higher-level design thinking.
“They’ll need to re-skill, which won’t be easy. At Tricentis, we’ve realised that defensibility no longer lies in code — AI makes code almost disposable.
The moat is data, delivery, and customer trust. Code can be reproduced quickly, but real-world data and trusted customer relationships cannot. That’s why enterprises are incredibly protective of their data and why we focus heavily on transparency.”
From ERP overhauls to agentic AI workflows
In terms of AI-first evolution, Tricentis has three main focuses:
Autonomous testing — essentially, letting users guide the process while AI handles the execution. “We want 'hands on the keyboard' testing to disappear.”
ERP replacement and validation — “Many enterprises are moving to cloud ERP systems while grappling with massive technical debt and vendor lock-in. We see a huge opportunity in helping them test and validate those transitions,” shared Colwell.
Agentic coding and validation — Colwell asserts that the gap in the market is not just in AI coding but in AI validation.
“You can’t let the same AI that writes code also test it, because it will inherit the same false assumptions. We’ve developed approaches using separate AIs — one to write code, another to test it — communicating through protocols like Model Context Protocol. That separation creates the same dynamic you get in human teams, where developers and testers think differently.”
The Tricentis AI workspace offers an enterprise-grade environment for managing AI agents, workflows and governance across the entire software lifecycle.
Coming in 2026, this intelligent workspace allows organisations to:
- Onboard and orchestrate AI agents from Tricentis, partners or third parties;
- Define governance and security policies for responsible AI operations;
- Integrate directly into SDLC workflows using tools like Jira, GitHub and ServiceNow;
- Monitor agent performance and compliance through unified dashboards; and
- Scale quality engineering autonomously, empowering teams to manage agentic AI “workforces” while focusing on higher value initiatives.
The AI workspace unites Tricentis’ agentic portfolio, including Agentic Test Automation (Tosca), Quality Intelligence (SeaLights), Test Management (qTest) and Performance Engineering (NeoLoad), all connected through Model Context Protocol (MCP) servers that enable secure, flexible interoperability across AI systems and enterprise toolchains.