Cyberette

Cyberette is an Amsterdam-based cybersecurity startup founded in 2024 by Julia Jakimenko that specialises in forensic-grade deepfake detection and digital identity protection. The company builds AI-native software to detect, analyse, and explain manipulated digital content across video, audio, images, and text, with a particular focus on fraud and investigation use cases.

Cyberette’s mission is to restore digital trust by making media verification as routine as two-factor authentication. In January 2026, the startup was selected as one of 45 top startups to represent the Netherlands at CES Las Vegas for the second consecutive year, following successful pilots with international forensic and defence organisations. The platform reports 99.7% accuracy in detecting AI-altered content whilst delivering results in under two seconds, making it suitable for real-time scenarios like secure video conferencing.

The Personal Story: When Deepfakes Became Personal

The inspiration for Cyberette emerged from a devastating experience that transformed founder Julia Jakimenko’s career trajectory. A close friend became a victim of deepfake abuse when her face and body were used to create fabricated images, which were then placed on dating websites to scam men out of money.

“She felt horrible and even sent money to one of the victims because she felt responsible,” Jakimenko recalls. “She was not the only one. I saw this happening repeatedly, especially to women.”

The statistics reveal the scale of the problem. More than 80% of deepfake explicit images target women, and most cases remain unresolved, especially sextortion incidents. Recently, Grok AI was called out for enabling men to manipulate images of women, removing their clothes, placing them in sexualised positions, and creating non-consensual depictions of violence against them.

Before founding the company, Jakimenko worked in data security and compliance in banking, where she witnessed the emergence of AI-embedded tools including face-swapping and image manipulation. Her background gave her understanding of security workflows and access to technical talent needed to build solutions. When the personal experience with her friend’s victimisation occurred, Jakimenko decided to act.

She built an initial prototype with a former colleague from VU Bank and exhibited it at Web Summit, where the strong interest validated market demand. “We later received funding from Rabobank and a grant from Microsoft, which allowed us to continue developing the product,” Jakimenko explains.

Revolutionary Technology: Multi-Modal Forensic Detection

Beyond Simple Real-or-Fake Scores

Most existing deepfake detection tools provide a basic real-or-fake score with rudimentary explainability, such as highlighting facial artefacts or noting audio inconsistencies. The Amsterdam-based startup takes a fundamentally different approach focused on fraud detection rather than generic deepfake identification.

“We analyse how content was altered, why it was altered, and the surrounding context,” Jakimenko explains. “We provide provenance information such as manipulation patterns, likely models used, approximate dates, and sometimes IP-level indicators if available.”

This forensic-grade analysis proves essential for legal proceedings, investigations, and corporate security teams who need detailed evidence beyond simple detection. Courts require understanding of manipulation techniques. Fraud investigators need to identify patterns connecting related attacks. Security teams must assess sophistication levels to understand threat actors.
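To make the idea of a forensic-grade output concrete, the sketch below models the kind of provenance fields the quote describes as a simple data structure. The field names and response shape are hypothetical illustrations for this article, not Cyberette’s actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProvenanceReport:
    """Hypothetical shape of a forensic detection report; every field
    name here is illustrative, not Cyberette's real output format."""
    verdict: str                                            # e.g. "manipulated" / "authentic"
    confidence: float                                       # detector confidence in [0, 1]
    manipulation_patterns: list = field(default_factory=list)  # e.g. ["face_swap"]
    likely_models: list = field(default_factory=list)           # suspected generator families
    approximate_date: Optional[str] = None                  # estimated creation window, if inferable
    ip_indicators: list = field(default_factory=list)           # network-level traces, when available

report = ProvenanceReport(
    verdict="manipulated",
    confidence=0.94,
    manipulation_patterns=["face_swap", "audio_clone"],
    likely_models=["diffusion-based face generator"],
)
print(report)
```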

Multi-Modal Detection Across Content Types

The platform scans video, audio, images, and text for signs of AI alteration with a reported 99.7% accuracy. This multi-modal approach addresses the reality that modern deepfakes often combine multiple techniques: synthetic video with cloned voice, manipulated images with fabricated text, or altered audio overlaid on genuine video.

Video Analysis: The system identifies inconsistencies in facial geometry, pose, and motion using landmark-based detection. Heatmap-based analysis highlights altered areas through anomaly scoring, showing exactly which portions of frames exhibit manipulation signatures. The technology detects subtle artefacts including unnatural eye movements, inconsistent lighting across facial features, temporal inconsistencies between frames, and physiological impossibilities in expressions or movements.
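One way to make the temporal-consistency idea concrete is to score each frame by how abnormally its facial landmarks move between frames. The toy numpy sketch below assumes landmarks have already been extracted by any off-the-shelf detector (e.g. MediaPipe Face Mesh); it illustrates the general technique, not Cyberette’s pipeline.

```python
import numpy as np

def temporal_anomaly_scores(landmarks: np.ndarray) -> np.ndarray:
    """Score each frame transition by how abnormally the facial
    landmarks move. landmarks: shape (frames, points, 2)."""
    # Mean landmark displacement between consecutive frames.
    motion = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1).mean(axis=-1)
    # Robust z-score: transitions whose motion deviates strongly from
    # the median are candidates for splices or synthesis glitches.
    med = np.median(motion)
    mad = np.median(np.abs(motion - med)) + 1e-9
    return np.abs(motion - med) / mad

# Example with synthetic data: 100 frames, 68 landmarks, one injected glitch.
rng = np.random.default_rng(0)
lm = rng.normal(size=(100, 68, 2)).cumsum(axis=0) * 0.1
lm[50] += 5.0  # simulate a sudden, physically implausible jump
scores = temporal_anomaly_scores(lm)
print("most anomalous transition:", scores.argmax())  # prints 49 or 50
```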

Audio Detection: Voice cloning has become frighteningly convincing, enabling scammers to impersonate executives, family members, or authority figures. The platform analyses spectral patterns, prosody, breathing patterns, and micro-pauses that distinguish synthetic from genuine human speech. It identifies telltale signs including unnatural pitch variations, artificial background noise patterns, and temporal inconsistencies in speech rhythm.
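Two of the cues named above, spectral patterns and micro-pauses, can be sketched with plain numpy. The code below is a minimal illustration of the underlying signal-processing ideas; the thresholds are arbitrary and none of this reflects Cyberette’s actual models.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray, frame: int = 1024, hop: int = 512) -> np.ndarray:
    """Per-frame spectral flatness (geometric / arithmetic mean of the
    power spectrum). Suspiciously uniform flatness curves can hint at
    synthetic or heavily processed audio."""
    n = (len(signal) - frame) // hop + 1
    frames = np.stack([signal[i * hop: i * hop + frame] for i in range(n)])
    power = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power), axis=1)) / np.mean(power, axis=1)

def pause_stats(signal: np.ndarray, sr: int, threshold: float = 0.01, min_pause: float = 0.05):
    """Durations of low-energy gaps; cloned speech often shows unnaturally
    regular pauses compared with human breathing rhythm."""
    silent = (np.abs(signal) < threshold).astype(int)
    padded = np.concatenate(([0], silent, [0]))
    edges = np.flatnonzero(np.diff(padded))      # starts and ends of silent runs
    gaps = (edges[1::2] - edges[::2]) / sr
    gaps = gaps[gaps >= min_pause]               # ignore zero-crossing flicker
    return (float(gaps.mean()), float(gaps.std())) if gaps.size else (0.0, 0.0)

sr = 16000
t = np.linspace(0, 2, 2 * sr)
audio = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 1.5 * t) > 0)  # tone with gaps
print("mean flatness:", spectral_flatness(audio).mean())
print("pause mean/std (s):", pause_stats(audio, sr))
```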

Image Verification: Static images are easier to manipulate than video yet can be equally damaging in fraud and disinformation contexts. The system detects inconsistencies in lighting, shadows, reflections, and pixel-level artefacts indicating manipulation. It identifies signs of face-swapping, body modification, background replacement, and object insertion or removal.
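A classic, widely known example of a pixel-level check is Error Level Analysis (ELA): resave a JPEG and diff it against the original, since regions edited after the last save tend to recompress differently. The Pillow sketch below shows the generic technique only; it is not Cyberette’s detector.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Resave the image as JPEG and diff against the original; edited
    regions often 'light up' in the amplified difference image."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so they become visible.
    extrema = diff.getextrema()
    scale = 255.0 / max(max(hi for _, hi in extrema), 1)
    return diff.point(lambda px: min(int(px * scale), 255))

# ela = error_level_analysis("photo.jpg"); ela.show()
```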

Text Analysis: AI-generated text accompanies many sophisticated fraud schemes. The platform analyses linguistic patterns, consistency with known writing styles, and contextual anomalies suggesting synthetic generation. This capability proves particularly valuable in business email compromise attacks where scammers impersonate executives through text communications.
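Two toy stylometric signals often cited for AI-generated text, low sentence-length variance (“burstiness”) and low lexical diversity, can be computed in a few lines. This is purely illustrative; production systems like the one described use far richer linguistic models.

```python
import re
import statistics

def text_signals(text: str) -> dict:
    """Naive stylometric signals: sentence-length spread and
    type-token ratio. Low values for both are weak hints of
    machine-generated prose; illustration only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "sentence_length_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

print(text_signals("The quarterly numbers look strong. Please wire the funds today. "
                   "This request is urgent and confidential."))
```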

Real-Time Processing for Live Scenarios

Perhaps most impressively, the platform delivers detection results in under two seconds, making it suitable for live applications including secure video conferencing, real-time verification of video calls, live broadcast monitoring, and on-the-fly content moderation.

This speed proves essential for emerging use cases where decisions must be made immediately. A CEO receiving a video call from someone claiming to be their CFO requesting an urgent wire transfer needs instant verification. A journalist receiving breaking news footage needs rapid authentication before publication. A financial services firm conducting remote identity verification for high-value transactions requires real-time confidence.

The technical achievement enabling two-second detection involves sophisticated optimisation of AI models, parallel processing architectures, edge computing capabilities, and intelligent pre-filtering that reduces unnecessary analysis.
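The pre-filtering and parallelism ideas can be sketched in a few lines: run a cheap heuristic first, send only the frames that pass it to the expensive model, fan the work out across workers, and enforce a hard deadline in the spirit of the sub-two-second budget described above. Both functions below are stand-ins, not Cyberette’s architecture.

```python
import concurrent.futures
import time

def cheap_prefilter(frame: int) -> bool:
    """Stand-in for a fast heuristic (e.g. 'is there a face at all?').
    Returning False lets the pipeline skip the expensive model."""
    return frame % 3 == 0          # placeholder logic for the sketch

def deep_analysis(frame: int) -> float:
    """Stand-in for the heavyweight detector."""
    time.sleep(0.05)               # simulate model latency
    return 0.1 * (frame % 10)

def analyse_stream(frames, budget_s: float = 2.0) -> dict:
    """Analyse only pre-filtered frames, in parallel, under a deadline."""
    deadline = time.monotonic() + budget_s
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(deep_analysis, f): f for f in frames if cheap_prefilter(f)}
        for fut in concurrent.futures.as_completed(futures, timeout=budget_s):
            results[futures[fut]] = fut.result()
            if time.monotonic() > deadline:
                break              # hard stop at the latency budget
    return results

print(analyse_stream(range(30)))
```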

C2PA Standards Integration

Cyberette actively contributes to the Coalition for Content Provenance and Authenticity (C2PA), an open technical standard providing publishers, creators, and consumers with the ability to trace the origin of different types of media. This standards work positions the startup as not just building detection technology but helping establish the infrastructure for digital trust.

C2PA enables content to carry tamper-evident provenance metadata describing its origin, how it was created, who created it, and any modifications applied. The company’s tools can verify C2PA credentials whilst detecting manipulation attempts even in content lacking provenance data, providing comprehensive coverage across authenticated and unauthenticated media.
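C2PA manifests are embedded in JUMBF metadata boxes labelled “c2pa”, so a crude byte-level probe can at least show whether a manifest is present. The sketch below is only that, a presence check; actually verifying provenance (signature chains, hash bindings) requires a full C2PA implementation such as the open-source c2pa SDK.

```python
def has_c2pa_manifest(path: str) -> bool:
    """Crude presence probe for C2PA provenance data: scan for the
    JUMBF box and 'c2pa' label bytes. Presence is not authenticity;
    real verification must validate the cryptographic signatures."""
    with open(path, "rb") as fh:
        data = fh.read()
    return b"jumb" in data and b"c2pa" in data

# Media lacking a manifest is not necessarily fake, and media carrying
# one is not necessarily authentic until its signatures validate; this
# is why provenance checks are paired with manipulation detection.
# print(has_c2pa_manifest("clip.jpg"))
```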

Building Cyberette: From Prototype to Commercial Launch

Early Development and Funding

After the successful Web Summit demonstration in late 2023, the company secured pre-seed funding from Rabobank, one of the Netherlands’ largest banks with significant interest in fraud prevention technology. This corporate backing provided not just capital but access to real-world fraud cases, security team expertise, and potential deployment channels.

Microsoft awarded a grant supporting technical development, particularly AI model training and cloud infrastructure. Microsoft’s involvement reflects the tech giant’s broader commitment to responsible AI and combating deepfake misuse across its platforms.

The early funding enabled building a team of AI researchers, data scientists, and security specialists whilst forging partnerships with leading technical universities. Academic collaborations provided access to research talent, datasets for training detection models, and credibility with government and enterprise customers evaluating new technologies.

Team Building and Culture

Jakimenko assembled a multidisciplinary team combining AI/ML engineering, cybersecurity operations, digital forensics, UI/UX design for investigation workflows, and partnerships and standards development. The relatively small team reflects the startup phase but includes impressive depth given the technical challenges.

The company culture emphasises not just technological innovation but ethical AI development, user privacy and data protection, transparency in detection methodologies, and commitment to inclusive security benefiting all internet users.

Jakimenko’s human rights law background influences company values. She remains outspoken about risks of data scraping, shortcuts, and misuse. She believes AI needs global standards and companies building it should be held accountable, positions that resonate with customers seeking trustworthy partners.

CES Recognition and International Expansion

Selection to represent the Netherlands at CES 2026 marks significant validation. CES remains the world’s most influential technology event, and the Netherlands pavilion in Eureka Park showcases the country’s most promising startups. Appearing for the second consecutive year demonstrates sustained momentum and international interest.

At CES 2025, Jakimenko discussed the platform’s capabilities in numerous media interviews, raising the company’s profile amongst potential customers, partners, and investors. The 2026 appearance following commercial launch should enable announcing significant customer wins and partnerships developed during 2025.

Q1 2026 Commercial Launch: From Pilots to Production

Following successful pilots with international forensic and defence organisations throughout 2025, the company announced full commercial rollout in early 2026. This transition from pilots to production marks critical maturation from promising technology to proven product.

The pilots provided invaluable feedback about workflow integration, performance requirements, documentation needs, and feature priorities. Government and defence pilots particularly tested the platform’s robustness, as these organisations maintain exacting standards and cannot tolerate false positives or negatives in critical security contexts.

The commercial launch includes multiple deployment models: cloud-based SaaS for organisations comfortable with external processing, on-premises installation for customers with data sovereignty requirements, API integration enabling detection within existing workflows, and white-label options for partners embedding detection in their products.
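The API-integration model typically looks like uploading media and receiving a verdict. The sketch below illustrates that pattern only; the endpoint URL, credential, request fields, and response shape are all invented for this example and may differ entirely from Cyberette’s real API.

```python
import requests

API_URL = "https://api.example-detector.com/v1/analyze"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                  # hypothetical credential

def check_media(path: str) -> dict:
    """Upload a file to a (hypothetical) detection endpoint and return
    the parsed JSON verdict."""
    with open(path, "rb") as fh:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": fh},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()   # e.g. {"verdict": "manipulated", "confidence": 0.94}

# result = check_media("suspect_video.mp4")
```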

Julia Jakimenko: Founder Redefining Success

From Human Rights Law to Cybersecurity Innovation

Julia Jakimenko’s path to founding a cybersecurity startup proved unconventional. She studied human rights law at the University of York, driven by a desire to address systemic injustices and protect vulnerable populations. This background might seem distant from AI and cybersecurity, but Jakimenko sees direct connections.

“Deepfake abuse is a human rights issue,” she explains. “When someone’s identity is weaponised against them, when women are systematically targeted with non-consensual sexual images, when vulnerable people are exploited through manipulated media, we’re talking about dignity, autonomy, and safety. These are fundamental rights.”

Her transition into technology came through roles in data security and compliance in banking, where she developed technical understanding whilst maintaining focus on protecting people from harm. When the personal experience with her friend’s victimisation occurred, Jakimenko possessed both moral clarity about the problem and technical capability to build solutions.

Redefining Success on Her Own Terms

In podcast appearances and interviews, Jakimenko discusses her evolution regarding definitions of success. Early in her career, she internalised external measures: titles, salaries, recognition from prestigious institutions. The pressure to conform to portrayed images of success created stress and dissatisfaction.

“Now for me, success is something more holistic, something that comes from inside,” Jakimenko reflects. “I cancelled all those external values. I transitioned into peace and happiness that doesn’t rely on that portrayed image of what success is like.”

This personal philosophy influences how she builds the company. Rather than optimising purely for growth metrics or exit valuations, Jakimenko prioritises sustainable business building, ethical technology development, team wellbeing and inclusive culture, and genuine impact on digital trust.

Visibility and Role Models for Women in Tech

Jakimenko frequently addresses challenges women face in technology sectors, where they’re often judged more on tone and appearance than competence. “I think we really need more role models,” she emphasises. “If you don’t have those role models growing up, how can you know that someone that looks like you can be successful?”

Her visibility through conference appearances, podcast interviews, and media coverage serves dual purposes: raising the company’s profile whilst providing representation for women considering technology entrepreneurship. Jakimenko doesn’t view this as separate from business building but integral to creating the diverse, innovative ecosystem technology needs.

Practical Guidance: How to Spot Deepfakes

During her podcast appearance on Women Disrupting Tech, Jakimenko shared practical red flags ordinary people can watch for to avoid deepfake scams:

Pressure to Move Quickly: The biggest red flag. If someone asks for fast responses, especially over phone calls, pause. Scammers move victims from email to WhatsApp or calls because voice cloning is easier to pass off over the phone.

Inconsistent Visual Details: Watch the eyes carefully. Check whether lighting on the person’s face matches the background. If something feels off, trust that instinct.

Audio-Visual Mismatches: If someone appears to be in an office but you hear beach sounds, that’s suspicious. These contextual inconsistencies often betray synthetic content.

Zero Trust Approach: Jakimenko advocates “zero trust” thinking, which doesn’t mean paranoia but paying attention. Verify unexpected requests through alternative communication channels. Call back using known phone numbers rather than numbers provided in suspicious messages.

By Ujwal Krishnan

Ujwal Krishnan is an AI and SEO specialist dedicated to helping UK businesses navigate and strategise within the ever-evolving AI landscape. With a Master's degree in Digital Marketing from Northumbria University, a degree in Political Science, and a diploma in Mass Communication, Ujwal brings a unique interdisciplinary perspective to the intersection of technology, business, and communication. He is a keen researcher and avid reader on deep tech, AI, and related innovations across Europe, informed by his experience working with leading deep tech venture capital firms in the region.