James Drayson, chief executive of Locai Labs, Britain’s answer to ChatGPT, says no tech company can guarantee its AI won’t produce explicit images, and accuses Silicon Valley rivals of pretending the problem doesn’t exist. Drayson is the son of former science minister Lord Drayson.
He is appearing before MPs on Parliament’s Human Rights Committee on Wednesday as part of its inquiry, Human Rights and the Regulation of AI, which is examining AI regulation and human rights risks.
Grok’s new image-editing feature allowed men to manipulate images of women and children – including everyday people and public figures – to remove their clothes and place them in sexualised positions, and to create non-consensual images of women being shot and killed.
And last year in the US, 14-year-old Sewell Setzer III took his own life after allegedly being manipulated by an AI chatbot.
Parliament’s Human Rights Committee is probing the risks and benefits of AI, how it might affect privacy and discrimination, and whether current UK laws and policies are sufficient or whether new legislation is needed to hold AI companies and developers accountable.
UK's AI rival outperforming US tech giants
Launched last year by founders James and George Drayson, Locai is the UK’s first rival to ChatGPT and, according to the company, already outperforms rival models such as Claude, DeepSeek, and Gemini on key measures.
And with the Grok scandal splashed across headlines and parents worried about what AI might expose their children to next, Drayson says Locai Labs is taking a stand.
Unlike Silicon Valley rivals, Locai refuses to roll out image generation until it’s truly safe. It has also banned under-18s from accessing its AI chatbot, and is calling for radical transparency across the industry.
Drayson is urging the government to back British innovation, and says the industry needs to wake up:
"It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty.

"We’re the only AI company openly working to fix these problems, not pretending they don’t exist. If there’s a risk, we’ll say so – and we’ll show our work."
According to Drayson, the UK is relying on foreign AI "that doesn’t share our values".
"We need our own models, built for Britain, with British laws and ethics at their core. That’s how we protect our rights and our kids.
"We believe the UK can lead the world in responsible, values-driven AI, if we choose to. That means tough regulation, open debate, and a commitment to transparency.

"AI is here to stay. The challenge is to make it as safe, fair, and trustworthy as possible, so that its rewards far outweigh its risks."