As manipulated media becomes cheaper, faster, and harder to detect, the cost is increasingly borne by victims — not platforms.
Julia Jakimenko is the founder and CEO of Cyberette.ai, a Dutch startup founded in 2024 that builds AI software to detect, analyse, and explain manipulated digital content — including deepfakes, voice cloning, altered images, videos, and text — with a specific focus on fraud and investigation use cases.
I spoke to Jakimenko while the team was preparing for this week's CES to learn more.
Prior to Cyberette, Jakimenko worked in data security and compliance in banking, where she watched the emergence of AI-embedded tools such as face-swapping and image manipulation. But it became personal when a friend's face and body were used to create images posted on dating websites to scam men out of money.
She recalls, “She felt horrible and even sent money to one of the victims because she felt responsible. She was not the only one. I saw this happening repeatedly, especially to women.”
More than 80 per cent of explicit deepfake images target women, and most cases remain unresolved, especially sextortion. Just this week, Grok was called out after men used it to manipulate images of women, removing their clothes and placing them in sexualised positions, and to create non-consensual images of women being shot and killed. With deepfakes showing no sign of abating, Jakimenko was inspired to act.
Her work gave her an understanding of security workflows and access to technical talent. So Jakimenko built an initial prototype with a former colleague from VU Bank.
“We exhibited it at Web Summit, received strong interest, and afterwards started building based on the leads we received. We later received funding from Rabobank and a grant from Microsoft, which allowed us to continue developing the product.”
From there, the company has built a team of AI researchers, data scientists, and security experts, and forged partnerships with leading technical universities.
Cyberette enables real-time detection of manipulated media, backed by media forensics insights and content authentication using C2PA standards and watermarking, supporting governments, media organisations, and enterprises.
The startup aims to support investigative teams dealing with deepfakes by providing explainability, provenance, and structured evidence they can actually use.
Beyond ‘real or fake’: why do we need another deepfake detection tool?
According to Jakimenko, most existing tools focus on a real-or-fake score with basic explainability, such as highlighting facial artefacts. Cyberette focuses on fraud detection rather than generic deepfake detection. She explains:
"We analyse how content was altered, why it was altered, and the surrounding context.
We provide provenance information — such as manipulation patterns, likely models used, approximate dates, and sometimes IP-level indicators if available.”
Explainability as evidence
Cyberette combines multiple techniques to spot manipulated or synthetic media with speed and accuracy. The platform uses landmark-based detection to identify inconsistencies in facial geometry, pose, and motion, alongside heatmap-based analysis that highlights altered areas through anomaly scoring.
Sentiment analysis adds another layer by flagging unusual emotional cues such as shifts in tone or hesitation, while real-time detection delivers results in under two seconds for live scenarios.
Additional capabilities include watermarking and metadata analysis, as well as broader media forensics and threat intelligence to support deeper investigations.
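The landmark-based geometry check described above can be illustrated with a simple sketch: ratios of distances between facial landmarks stay roughly constant across frames of a genuine face, because they are fixed by the subject's anatomy, while splices and face swaps often break them. The code below is purely illustrative and says nothing about Cyberette's actual models; the landmark coordinates and threshold are hypothetical, and a real system would extract landmarks with a face tracker.

```python
import math

# Hypothetical sketch of landmark-based consistency checking.
# Each frame is three (x, y) landmarks: left eye corner, right eye
# corner, nose tip. A real pipeline would obtain these from a tracker.

def geometry_ratio(landmarks):
    """Inter-ocular distance divided by eye-to-nose distance.

    For a genuine face this ratio is nearly constant across frames,
    since it is fixed by the subject's facial geometry.
    """
    left_eye, right_eye, nose = landmarks
    eye_span = math.dist(left_eye, right_eye)
    eye_to_nose = math.dist(left_eye, nose)
    return eye_span / eye_to_nose

def flag_inconsistent_frames(frames, threshold=0.15):
    """Return indices of frames whose ratio deviates from the median."""
    ratios = [geometry_ratio(f) for f in frames]
    median = sorted(ratios)[len(ratios) // 2]
    return [i for i, r in enumerate(ratios)
            if abs(r - median) / median > threshold]

# Frames 0-2 share one geometry; frame 3 simulates a face swap.
frames = [
    [(100, 100), (160, 100), (130, 150)],
    [(102, 101), (162, 101), (132, 151)],
    [(99, 99), (159, 99), (129, 149)],
    [(100, 100), (200, 100), (130, 150)],  # eye span suddenly wider
]
print(flag_inconsistent_frames(frames))  # → [3]
```

Production systems track many more landmarks and fuse this signal with others (heatmaps, metadata, audio), but the underlying idea is the same: manipulated frames violate geometric invariants that genuine footage preserves.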
Built for real-time, high-stakes workflows
The platform is designed specifically for investigative and monitoring workflows, using in-house AI models built from scratch and optimised for precise, real-time detection tasks.
“For investigation use cases, we also explain intent when relevant, for example by analysing inconsistencies between voice, visual signals, and contextual meaning,” explained Jakimenko.
Accuracy, latency, and architectural edge
Cyberette’s tech advantage lies in its ability to deliver 99.7 per cent accuracy across tested datasets while providing real-time, low-latency results at scale.
According to Jakimenko:
“This is the result of our strong backend architecture and lightweight models. Our APIs are designed for real-time detection.
Further, Cyberette can run in the cloud or fully on-premise.
Many of our customers have strong local infrastructure and GPUs, which allows the system to run efficiently without relying on our cloud."
Built for real-time investigation workflows
Cyberette’s primary customers are investigation and monitoring teams, including defence threat-monitoring platforms, public sector organisations, and private-sector fraud and investigation units.
In the public sector, it strengthens critical communications through live verification, biometric checks, and behavioural analysis for defence, intelligence, and law enforcement.
For enterprises, the platform helps prevent fraud by analysing behaviour, integrating with existing security systems, and operating at scale for banks and financial institutions. It also protects licensed content through metadata checks, intelligent watermarking, cross-platform monitoring, and C2PA verification for creators, brands, and talent agencies.
Further, Cyberette integrates with video conferencing tools (Teams, Zoom, Google Meet) to verify participants and detect manipulation instantly, supports high-volume identity verification via biometric analysis and SDKs, and extends to e-learning platforms with easy-to-use detection tools, practical learning modules, and seamless integration.
The platform is engineered to support millions of users and billions of files without performance trade-offs, running efficiently on both GPU and CPU to keep costs low and accessibility high. Built for global deployment, Cyberette is compliance-ready by design, meeting stringent requirements across GDPR, ISO, and PII standards.
“We integrate C2PA provenance and authentication tooling, which is supported by organisations like Microsoft and Adobe, and increasingly trusted across the industry. Provenance frameworks are becoming essential as misinformation increases,” shared Jakimenko.
In terms of its roadmap, the company sits between pilot and full commercial rollout:
“We already have paid customers, but we intentionally selected specific customers and industries where the problem is most acute."
Some cases involve small financial losses, but others involve lives — such as kidnapping threats, cyberbullying, sextortion, and abuse cases:
“We worked with a forensic team in Colombia and with a government defence organisation in Singapore. We have also had interest from the German and Dutch governments. Last year, the Dutch government said it did not yet see enough cases. I expect that to change.”
Deepfakes are getting better — and platforms aren’t stopping them
Jakimenko sees no sign of deepfakes abating, instead believing that they will increase and improve in quality:
“We are moving toward a situation where AI-generated outputs are pushed by browsers and platforms as trusted information. This creates confusion and a lack of trust.
There is currently little incentive for platforms to stop this. That’s why we focus on teams already dealing with fraud and crime.”
Cyberette is going to market in the coming months while raising a Seed round. It plans to expand into behavioural and sentiment analysis where relevant for investigative contexts, and later to explore earlier stages of the manipulation lifecycle. But for now, it is focused on high-impact, high-risk cases.