Sharon Brightwell heard her daughter sobbing. The voice on the phone was unmistakable: April Monroe, hysterical, claiming she'd hit a pregnant woman while texting and driving. The baby was dead. Police had taken April's phone. A man identifying himself as April's attorney came on the line and said he needed $15,000 for bond, immediately.
Brightwell, a mother in Dover, Florida, handed the cash to a courier that same day. It wasn't until her grandson, suspicious of the whole thing, dialed April's actual number that the family learned the truth. April was at work. She hadn't been in any accident. The voice on the phone was a synthetic clone, generated from a handful of Facebook videos, stitched together by an algorithm that had learned to cry.
That was July 2025. Since then, the problem has gotten worse. Much worse.
One in Four Americans Already Got the Call
Hiya, the Seattle-based caller intelligence company that protects over 550 million users monthly, released its annual State of the Call report on March 2. The findings landed like a grenade in the telecom industry.
Twenty-five percent of Americans say they received an AI-generated deepfake voice call in the past twelve months. Another 24 percent admit they cannot tell the difference between a cloned voice and a real one. That means roughly half the country has either been targeted by synthetic voice fraud or would fail to recognize it if they were.
"When consumers tell us that scammers are beating mobile networks two-to-one, that has to be a wake-up call for the entire telecom industry," said Alex Algard, CEO and Founder of Hiya.
The numbers behind that statement are staggering. Americans now receive an average of 9.9 unwanted calls per week, more than 500 per year, a figure growing at a 16 percent compounded annual rate since 2023. Nearly half of respondents said phone spam is getting worse. And when asked who is doing a better job of staying ahead, consumers sided with the scammers over their own carriers by a margin of nearly two to one.
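The arithmetic behind those figures checks out. A quick back-of-the-envelope sketch, using only the numbers Hiya reports (the forward projection is illustrative, not something the report itself publishes):

```python
# Hiya's cited figures: 9.9 unwanted calls per week,
# growing at a 16% compound annual rate since 2023.
WEEKLY_CALLS = 9.9
CAGR = 0.16

annual_calls = WEEKLY_CALLS * 52
print(f"Calls per year: {annual_calls:.0f}")  # 515 -- "more than 500 per year"

# Illustrative projection at the reported growth rate
# (an extrapolation, not a figure from the report):
for years_out in range(1, 4):
    projected = WEEKLY_CALLS * (1 + CAGR) ** years_out
    print(f"In {years_out} year(s): {projected:.1f} calls/week")
```

At that compounding rate, the weekly average would pass 15 calls within three years.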
Hiya surveyed over 12,000 consumers across six countries for the report. The American data was the most alarming.
The Arms Race the Carriers Are Losing
The telecommunications industry has spent billions on call-authentication protocols like STIR/SHAKEN, designed to verify that the number appearing on your caller ID actually belongs to the person calling. Scammers adapted. They spoof numbers that pass verification, rotate through thousands of disposable lines, and increasingly skip the phone network entirely by attacking through messaging apps and VoIP services that sit outside carrier control.
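To see what STIR/SHAKEN actually verifies, and why passing it is not the same as being trustworthy, it helps to look at the attestation claim the protocol carries. A SHAKEN "PASSporT" is a JWT in the SIP Identity header whose payload includes an `attest` claim: "A" (full attestation that the caller is entitled to the number), "B" (partial), or "C" (gateway only). The sketch below decodes that claim from an illustrative, fabricated token; it deliberately skips signature verification, which in practice requires the signing carrier's certificate.

```python
import base64
import json

def attestation_level(passport_jwt: str) -> str:
    """Read the SHAKEN 'attest' claim from a PASSporT JWT payload.

    Simplified for illustration: no signature check, no certificate chain.
    """
    payload_b64 = passport_jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("attest", "unknown")

# Fabricated example token -- header and signature are placeholders,
# and the phone number is illustrative, not a real carrier's token.
payload = base64.urlsafe_b64encode(
    json.dumps({"attest": "A", "orig": {"tn": "12025551234"}}).encode()
).decode().rstrip("=")
token = f"fake-header.{payload}.fake-signature"
print(attestation_level(token))  # prints "A"
```

The catch, as the paragraph above notes, is that an "A" attestation only says the originating carrier vouched for the number. A scammer calling through a compliant VoIP provider can pass this check with a perfectly valid token.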
The result: 38 percent of subscribers told Hiya they are likely to switch providers if they feel their carrier isn't protecting them from AI-powered scams. Sixty-seven percent of Americans believe carriers should bear some financial responsibility for scam losses that originate on their networks. And 55 percent want carriers to offer zero-liability fraud protection, the same guarantee credit card companies have provided for decades.
In non-U.S. markets surveyed by Hiya, the sentiment was even more intense: over 75 percent of respondents in the UK, France, Germany, Canada, and Spain demanded carrier financial liability for scam calls.
The carriers, for their part, point to the scale of the problem. Billions of calls cross their networks every day. Filtering out the fraudulent ones in real time, especially when the voice on the other end sounds perfectly human, requires a level of AI sophistication that most operators have not yet deployed.
A $1.1 Billion Problem That's Headed Toward $40 Billion
The financial toll has crossed a threshold that is hard to ignore. Deepfake-enabled fraud drained $1.1 billion from U.S. corporate accounts in 2025 alone, according to research cited by Fortune. That figure tripled from $360 million the year before.
The Deloitte Center for Financial Services projects that generative-AI-enabled fraud losses in the U.S. will climb to $40 billion by 2027, a compound annual growth rate of 32 percent from the $12.3 billion baseline in 2023.
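Deloitte's projection is straightforward compound growth, and it is easy to verify from the two figures the report gives:

```python
# Deloitte's stated baseline and growth rate, compounded over
# the four years from 2023 to 2027.
baseline_2023 = 12.3  # billions USD
cagr = 0.32
years = 2027 - 2023

projected_2027 = baseline_2023 * (1 + cagr) ** years
print(f"Projected 2027 losses: ${projected_2027:.1f}B")  # $37.3B
```

That lands at roughly $37 billion, consistent with Deloitte's rounded $40 billion headline figure.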
Voice cloning fraud specifically rose 680 percent in a single year, according to data compiled by cybersecurity firm DeepStrike. The volume of deepfakes online exploded from roughly 500,000 in 2023 to over 8 million in 2025, an annual growth rate approaching 900 percent.
And the people losing the most money can afford it the least. Americans aged 55 and older lose an average of $1,298 per phone scam, triple the amount younger adults lose, according to the Hiya report. The FBI's Internet Crime Complaint Center reported that seniors 60 and over lost $4.8 billion to scammers in 2024 alone. The FTC estimates the true cost, including unreported losses, could be as high as $81.5 billion.
How a Three-Second Audio Clip Becomes a Weapon
The technology behind voice cloning has crossed what Siwei Lyu, Professor of Computer Science and Director of the UB Media Forensic Lab at the University at Buffalo, calls the "indistinguishable threshold."
A few seconds of audio, pulled from a TikTok video, a voicemail greeting, a podcast appearance, even a conference call recording, is now enough to generate a synthetic clone that captures natural intonation, rhythm, emphasis, emotion, pauses, and breathing patterns. The clone doesn't just sound like the target. It sounds like the target having a bad day.
"Models produce stable, coherent faces without the flicker, warping, or structural distortions around the eyes and jawline that once served as reliable forensic evidence," Lyu wrote in a December 2025 analysis. The same applies to audio: the artifacts that once marked synthetic speech, the metallic ring, the unnatural cadence, are gone.
Consumer tools from OpenAI, Google, and a wave of startups have made this capability accessible to anyone with a laptop. The barrier to entry for voice fraud has collapsed. What once required a sophisticated criminal operation now requires a browser and a YouTube link.
The scams themselves have evolved accordingly. The old-fashioned "grandparent scam," where a caller claims to be a grandchild in trouble, has been supercharged with AI. Scammers now scrape social media for personal details: a recent vacation photo, a pet's name, a check-in at a restaurant. They weave those details into the call, making the scenario almost impossible to dismiss in the moment.
In one documented pattern, scammers clone a family member's voice and stage a fake kidnapping, complete with crying and panicked pleas for help, then hand the phone to a "kidnapper" who demands a ransom between $2,500 and $15,000. The FBI flagged this specific tactic in a 2025 warning to consumers.
The Corporate Boardroom Isn't Ready Either
It's not just families getting hit. In February 2025, scammers cloned the voice of Italian Defense Minister Guido Crosetto and called some of Italy's most prominent business leaders, including former Inter Milan president Massimo Moratti, fashion icon Giorgio Armani, and tire magnate Marco Tronchetti Provera. The callers, posing as Crosetto's staff, claimed Italian journalists had been kidnapped abroad and needed ransom payments. Moratti wired nearly €1 million to accounts in the Netherlands and Hong Kong before anyone realized the voice was synthetic. Police eventually froze the funds.

In another case cited by James Richardson, Senior Managing Director at global law firm Dentons, attackers created a fake WhatsApp account using a CEO's photo and then staged a Microsoft Teams call with an AI-cloned voice trained on YouTube footage, all to solicit a fraudulent wire transfer from the company's finance team.
Yet only 32 percent of corporate executives say their organizations are prepared to handle a deepfake incident, according to the Fortune analysis. That gap between threat and readiness should alarm every board of directors. The voice cloning systems behind these attacks are built on the same transformer architecture that powers large language models, which means every advance in the broader AI field tends to make them cheaper and more convincing.
The "Jury Duty Warrant" scam has also surged. Scammers call claiming to be court officials, telling victims they missed jury duty and face arrest unless they pay an immediate fine, usually by gift card or wire transfer. The FTC and multiple state courts have issued warnings, but the calls keep coming, now often enhanced with cloned voices of local law enforcement.
The Detection Arms Race
The defense side is scrambling to keep pace. Pindrop, an Atlanta-based voice security company, has deployed its Pulse deepfake detection system across banking call centers, claiming a 99 percent detection rate for known deepfake engines and over 90 percent for previously unseen generators, with a false positive rate under one percent. In January 2026, Pindrop partnered with NiCE to bring native deepfake detection to the CXone contact center platform.
McAfee has shipped a consumer-facing Deepfake Detector on Lenovo AI PCs, trained on nearly 200,000 audio samples, that claims 96 percent accuracy and can flag synthetic audio within seconds. The processing happens entirely on-device, which means the audio never leaves the user's computer.
But these tools face a fundamental asymmetry. Defenders need to catch every synthetic voice. Attackers only need to slip through once. And the attackers are iterating faster.
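That asymmetry can be put in numbers. Even taking Pindrop's claimed figures at face value (99 percent detection, roughly 1 percent false positives), a base-rate calculation shows what happens when deepfakes are a small fraction of traffic. The prevalence value below is an assumption for illustration, not a cited figure:

```python
# Base-rate sketch using Pindrop's claimed rates.
TPR = 0.99          # claimed detection rate for known deepfake engines
FPR = 0.01          # claimed false positive rate (stated as "under 1%")
PREVALENCE = 0.001  # assumed: 1 in 1,000 calls is a deepfake

true_flags = TPR * PREVALENCE          # deepfakes correctly flagged
false_flags = FPR * (1 - PREVALENCE)   # legitimate calls wrongly flagged
precision = true_flags / (true_flags + false_flags)

print(f"Share of flags that are actual deepfakes: {precision:.1%}")  # 9.0%
```

At that assumed prevalence, roughly nine out of ten flagged calls would be legitimate. On networks carrying billions of calls a day, even a sub-one-percent false positive rate generates an enormous volume of false alarms, which is exactly why a detector that looks excellent on paper can still struggle in deployment.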
"Simply looking harder at pixels will no longer be adequate," Lyu warned, advocating instead for infrastructure-level protections: cryptographic media authentication, standardized content provenance, and forensic tools that can operate at the network layer before a call ever reaches a consumer's phone.
Gartner has projected that by the end of 2026, 30 percent of enterprises will consider their current identity verification solutions unreliable in isolation because of deepfakes. The implication is clear: voice alone can no longer serve as proof of identity. And the same adaptive AI capabilities are available to both sides, to the criminals building these tools and to the defenders trying to stop them.
What Consumers and Regulators Are Demanding
The Hiya report reveals a public that has lost patience. Seventy-two percent of consumers support stronger government regulations to force carriers to act. The FCC has already clarified that AI-generated voices fall under existing robocall rules, and the FTC has proposed a comprehensive ban on impersonation fraud that would extend to AI-cloned voices.
But regulation moves slowly. The FTC must issue its AI policy statement by March 11, 2026, and the Commerce Department has been tasked with evaluating whether state-level AI laws, including those in Colorado, California, and Texas, are too burdensome for industry. The outcome of that review will determine whether consumers get real protections or more promises.
Meanwhile, cybersecurity experts and the FTC itself have landed on a decidedly low-tech recommendation: the family safe word. Choose a nonsensical phrase that only your family knows, never share it online, and demand it at the start of any urgent call requesting money. If the caller can't produce it, hang up.
It's a remarkably simple defense against a remarkably sophisticated attack. But as Algard put it: "Scammers are weaponizing AI to clone voices and steal from vulnerable people. We are in an arms race where scammers are using AI as a weapon, which means operators have to use AI as a shield."
The Bottom Line
The phone in your pocket has become a threat vector. One in four Americans have already received a call from someone who wasn't real, and the technology that makes those calls possible is getting cheaper, faster, and more convincing every month. The carriers admit they're being outpaced. The regulators are still writing rules. The detection tools are promising but perpetually one step behind.
What the Hiya report makes painfully clear is that the economics of voice fraud have tipped decisively in the scammers' favor. Cloning a voice costs nearly nothing. Defending against it costs billions. And the victims, disproportionately seniors and families caught in moments of panic, bear the full weight of the loss.
Sharon Brightwell eventually got the call she needed: the real one, from her actual daughter, confirming she was safe. Police recovered about half the money. The rest never came back.
Pick a safe word. Tell your parents. Do it today.
Sources
- State of the Call 2026: AI Deepfake Voice Calls Hit 1 in 4 Americans (March 2, 2026)
- Boards Aren't Ready for the AI Age: What Happens When Your CEO Gets Deepfaked (March 3, 2026)
- 2026 Will Be the Year You Get Fooled by a Deepfake, Researcher Says (December 27, 2025)
- Dover Woman Loses $15K After Scammers Used AI to Impersonate Daughter (July 2025)
- Vishing Statistics 2025: AI Deepfakes and the $40B Voice Scam Surge (2025)
- Financial Fraud Cost Older Adults Up to $81.5 Billion in 2024, FTC Estimates (December 13, 2025)
- Scammers Clone Italian Defence Minister's Voice with AI in Ransom Scheme (February 10, 2025)
- Fighting Back Against Harmful Voice Cloning (FTC Consumer Advice)
- Pindrop Pulse for Audio Deepfake Detection (2026)
- Seniors Lost $4.8 Billion to Scammers in 2024: FBI (2025)