$200 million. That's how much money got stolen through deepfake fraud in just the first four months of 2025, according to a new report from Resemble AI. Not the whole year. Not even half the year. Four months.
And here's the thing that makes my stomach drop: that's only the reported cases. The actual number is almost certainly way higher, because most companies don't publicly admit when they get deepfake-scammed. Bad for the stock price, you know?
I've been following AI security issues for a while now, but this escalation is something else. We went from "wow, deepfakes are getting scary" to "deepfakes are actively stealing hundreds of millions of dollars" faster than anyone predicted.
How It Actually Happens
The stereotypical image of a deepfake scam is some tech wizard spending weeks creating a perfect fake video of a CEO. But that's not what's happening anymore.
With just 15 seconds of audio, scammers can clone someone's voice using tools like ElevenLabs or Resemble AI (ironic, given that Resemble AI is the company publishing this report). Then they just... call someone at a company, pretending to be the CFO or CEO, and ask them to wire money.
There was a widely reported case in early 2024 where a finance worker in Hong Kong transferred $25 million after a deepfake video call that appeared to show their CFO and several colleagues. Video. Call. In real time. Not even a pre-recorded clip they could analyze frame by frame. Just a regular-looking Zoom call with people who looked and sounded exactly like their coworkers.
The crazier part? The technology to do this is increasingly available to anyone with a laptop and an internet connection. DeepFaceLive, Magicam, Amigo AI—these aren't dark web hacking tools. They're just... apps. Some of them are even on GitHub.
The Numbers Are Getting Worse
Here's the trajectory that should terrify everyone:
- Deepfake incidents in all of 2024: 150
- Deepfake incidents in first half of 2025: 580
If the second half of 2025 just keeps pace with the first, we're on track for nearly an 8x increase year-over-year. And the sophistication is improving faster than the detection tools.
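Quick back-of-the-envelope math on that, using only the two numbers above and assuming the second half of 2025 merely keeps pace with the first (which is probably optimistic):

```python
# Back-of-the-envelope projection from the two reported figures above.
# Assumes the second half of 2025 merely matches the first half (likely optimistic).
incidents_2024 = 150
incidents_h1_2025 = 580

projected_2025 = incidents_h1_2025 * 2           # naive full-year estimate: 1160
growth_factor = projected_2025 / incidents_2024  # year-over-year multiple

print(f"Projected 2025 incidents: {projected_2025}")
print(f"Year-over-year growth: ~{growth_factor:.1f}x")  # ~7.7x
```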
Even more concerning: 68% of people can't distinguish between real and fake video content. Two-thirds! That's not a "some people are gullible" problem. That's a "humans are fundamentally not equipped for this" problem.
I showed my girlfriend a deepfake video last month (for research, I swear), and she was 100% convinced it was real until I told her otherwise. She works in media! She literally edits video for a living! And she couldn't spot it.
Who's Getting Targeted
The breakdown is depressing:
- 41% of deepfake targets are public figures (politicians, celebrities)
- 34% are private citizens (just regular people)
- The rest are organizations
So basically, nobody's safe. You don't need to be famous or rich. You just need to have something a scammer wants, or know someone who does.
The most common uses are:
- Non-consensual explicit content (32%)
- Scams and fraud (23%)
- Political manipulation (14%)
- Disinformation (13%)
That first one is particularly grim. Nearly a third of deepfakes are being used to create fake intimate images, often for blackmail or harassment. Women are disproportionately targeted, of course.
The CEO Impersonation Problem
Corporate impersonation scams have been around long enough to have an established acronym: BEC (Business Email Compromise). Deepfakes have now added call and video versions of the same con, complete with their own labels: BCC (Business Call Compromise) and BVC (Business Video Compromise).
The playbook is pretty consistent:
- Identify a target employee with access to money or data
- Research their boss or CEO (often from LinkedIn and company videos)
- Clone the voice/appearance using available AI tools
- Make the call/video call with an urgent request
"We're doing a confidential acquisition, need to wire $500K immediately, don't tell anyone."
And here's what makes it work: urgency plus authority. The fake CEO creates time pressure ("need this done in the next hour") combined with rank ("I'm the CEO, just do it"). Those two together override most people's fraud detection instincts.
I talked to a friend in corporate security, and he said their biggest problem is that employees want to be helpful. They want to be the person who comes through when the CEO needs something. That instinct—which is usually a good thing—becomes a vulnerability.
The Voice Cloning Thing Is Wild
The voice cloning capability has gotten absurdly good. With 15 seconds of audio, AI can replicate your voice with 85% accuracy. That's good enough to fool your own family members.
There's this one case where a scammer called someone pretending to be their grandson, claiming he'd been in a car accident and needed bail money immediately. The voice was perfect. The emotional distress sounded authentic. The grandmother wired $15,000 before realizing her grandson was fine and had never left town.
That's not a technology problem. That's a "we're f*cked" problem. Because how do you defend against that? Tell your grandma to never help her grandkids? Build a family code word system like you're in a spy movie?
Real-Time Deepfakes Are the New Nightmare
Pre-recorded deepfakes were scary enough. But now we've got real-time deepfakes that can manipulate video during live calls.
These tools let scammers actively impersonate someone during a video call. They can change their face, voice, gender, race—whatever they need to match the person they're impersonating. And they can improvise in real-time, responding naturally to questions and conversations.
Traditional liveness detection (the thing that makes you turn your head or blink during identity verification) is becoming useless. The AI can fake those movements too.
From the research: 1 in every 20 identity verification failures is now linked to a deepfake. And that's just the ones being caught. How many are slipping through?
What Companies Are Trying to Do
The enterprise response has been a mix of "this is fine" and panicked scrambling.
Some preventive measures being rolled out:
- Multi-factor authentication using physical devices (harder to fake)
- Verification protocols for large transactions (callback on a known number; a sketch follows this list)
- AI detection tools (which... use AI to detect AI, so that's fun)
- Employee training programs (teaching people to be suspicious of urgent requests)
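That callback idea is worth making concrete. Here's a minimal sketch of what the rule could look like; the threshold, the contact directory, and the `place_callback` stand-in are all hypothetical, not anyone's actual system:

```python
# Hypothetical sketch of a callback verification rule for outgoing wires.
# The threshold, contact directory, and callback function are stand-ins;
# a real implementation would hook into actual payment and telephony systems.

CALLBACK_THRESHOLD = 10_000  # dollars; anything above this needs verification

# Known-good numbers, maintained out-of-band (NOT taken from the request itself)
known_contacts = {
    "cfo": "+1-555-0100",
    "ceo": "+1-555-0101",
}

def place_callback(number: str, amount: int) -> bool:
    """Stand-in for a human calling the known number and confirming verbally."""
    print(f"Call {number} and confirm the ${amount:,} transfer before sending.")
    return False  # unconfirmed until a person actually completes the call

def approve_transfer(amount: int, requested_by: str) -> bool:
    """Approve only if the amount is small, or a callback to a known number succeeds."""
    if amount < CALLBACK_THRESHOLD:
        return True
    number = known_contacts.get(requested_by)
    if number is None:
        return False  # unknown requester: escalate instead of paying
    return place_callback(number, amount)

# The key point: the callback goes to a number you already had on file,
# never to whoever is on the suspicious call asking for the money.
print(approve_transfer(500_000, "cfo"))  # forces a callback before any wire goes out
```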
But here's the problem: all of those add friction. And friction slows business down. So companies are trying to balance "don't get scammed" with "don't make it impossible to get work done."
One CISO I spoke with (not for attribution) basically said they're fighting a losing battle. For every detection method they implement, the deepfakes get better. It's an arms race, and they're not confident they're winning.
The Policy Response Is... Slow
Legally, we're way behind. 47 states have some deepfake legislation, but most of it focuses on political deepfakes or explicit content. The fraud stuff is still being figured out.
The EU AI Act includes deepfake labeling requirements (the transparency rules phase in through August 2026), but labeling only works if the deepfake creator complies. Which, spoiler alert, scammers don't.
The U.S. passed the TAKE IT DOWN Act in May, which criminalizes non-consensual intimate deepfakes. That's good! But it doesn't really address the fraud side.
We need something more comprehensive. We need:
- Mandatory watermarking for AI-generated content
- Liability for platforms that host deepfakes
- International cooperation (scammers work across borders)
- Faster takedown mechanisms
- Better detection standards
But getting all that through various government bodies while the technology is evolving this fast? Good luck.
What You Can Actually Do
The advice from security experts is depressingly basic but actually helpful:
For individuals:
- Be skeptical of urgent requests, even from people you trust
- Verify through a different channel (if someone calls, call them back on a known number)
- Establish code words with family members for emergency situations
- Be careful what you post online (every video/audio of you is potential training data)
- Use multi-factor authentication everywhere
For companies:
- Implement verification protocols for large transactions
- Train employees to recognize social engineering
- Limit what financial information is easily accessible
- Have clear escalation procedures for unusual requests (a toy example follows this list)
- Consider voice verification systems
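And for the escalation procedures, the simplest version is a written policy people can actually look up. Here's a toy encoding of one; the categories and steps are made up for illustration, not pulled from any real playbook:

```python
# Hypothetical escalation policy, encoded as data so it can be checked in code
# and reused in training materials. Categories and steps are illustrative only.
ESCALATION_POLICY = {
    "wire_transfer_over_10k":   ["callback on known number", "second approver"],
    "change_of_vendor_banking": ["written confirmation from vendor", "finance lead sign-off"],
    "urgent_request_from_exec": ["verify via separate channel", "wait 30 minutes", "notify security"],
    "credential_or_mfa_reset":  ["identity check over a known channel", "IT ticket"],
}

def required_steps(request_type: str) -> list[str]:
    """Return the checks a request must pass; unknown request types always escalate."""
    return ESCALATION_POLICY.get(request_type, ["escalate to security team"])

print(required_steps("urgent_request_from_exec"))
```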
But honestly? These feel like sandbags against a tsunami. They'll help, but they're not solving the fundamental problem.
The Uncomfortable Reality
We've built technology that makes it trivially easy to impersonate anyone with convincing accuracy, and we're distributing it widely before we've figured out how to detect or prevent abuse.
That's not a hypothetical future problem. That's right now. That's $200 million in four months.
And it's going to get worse. The technology is improving faster than our defenses. More people are getting access to these tools. The economic incentives for scammers are enormous—why rob a store for a few thousand when you can deepfake a CEO and steal millions?
The thing that keeps me up is this: we're seeing exponential growth in both capability and incidents. The first half of 2025 saw nearly four times as many deepfake incidents as all of 2024. If that trend continues, what does 2026 look like?
My Actual Fear
I don't think we're prepared for what happens when deepfakes become indistinguishable from reality for the average person. Not in some future AGI scenario—in like six months.
Because right now, the technology is good enough to fool most people, but it still requires some technical skill to deploy. What happens when it's so easy that literally anyone can create a convincing deepfake in five minutes?
We're already seeing teenagers use these tools to create fake explicit images of classmates. We're seeing political campaigns deepfake opponents saying things they never said. We're seeing scammers steal hundreds of millions.
And the tech is only getting better. And cheaper. And more accessible.
I don't have a good solution. I wish I did. But I think we need to have an honest conversation about the fact that "seeing is believing" is becoming an increasingly dangerous heuristic, and we don't have a good replacement for it yet.
In the meantime, I guess we all just need to get comfortable with trusting nothing and verifying everything, which sounds exhausting but also necessary.
Welcome to the future. It's expensive and paranoid and nobody's having a good time.