The Deepfake Fraud Epidemic#
AI-generated voice cloning and video deepfakes now drive one of the fastest-growing categories of fraud. Financial losses from deepfake-enabled fraud exceeded $200 million in the first quarter of 2025 alone, and the technology grows more accessible every day.
Scammers need as little as three seconds of audio to create a voice clone with an 85% match to the original speaker. This source audio is easily scraped from social media posts, podcasts, corporate webinars, or YouTube videos. Studies show humans mistake AI voices for real ones about 80% of the time in short clips.
The legal system is scrambling to respond. Who bears liability when an AI clones someone’s voice: the platform that enabled the cloning, the scammer who deployed it, or the social media site where the voice samples were harvested? Courts, regulators, and legislatures are grappling with these questions as losses mount.
Major Deepfake Fraud Incidents#
Arup Engineering Firm - $25 Million Video Deepfake (2024)#
In early 2024, British engineering giant Arup became the victim of the largest documented deepfake fraud to date.
How the Scam Worked:
- A finance employee in Hong Kong received an email from an account claiming to be Arup’s CFO requesting confidential transactions
- The employee initially suspected phishing, but agreed to join a video conference call
- The call appeared to include the CFO and several other senior executives; all of them were deepfakes
- Reassured by the “live” video, the employee authorized 15 transactions totaling $25.6 million (200 million Hong Kong dollars) to fraudsters’ accounts
The Aftermath:
- Arup confirmed the incident to Hong Kong police in January 2024
- The stolen funds were never recovered
- Arup stated: “Our financial stability and business operations were not affected and none of our internal systems were compromised”
Lessons for Corporate Liability:
The Arup case demonstrates that even sophisticated organizations with trained employees can fall victim to deepfake fraud. Key questions include:
- Does the company have adequate verification protocols for high-value transactions?
- Should video calls be trusted for financial authorization?
- What duty of care do companies owe shareholders when AI fraud defeats internal controls?
Sharon Brightwell Voice Clone Scam - $15,000 (2025)#
In July 2025, Sharon Brightwell of Dover, Florida, received a phone call from her “daughter,” who was crying and claimed she had been in a car accident. The voice pleaded for immediate financial help. Overwhelmed by emotion, Brightwell sent $15,000 in cash to a courier, only to discover that the tearful voice was an AI-generated clone of her daughter’s.
This case exemplifies the “grandparent scam” that has plagued seniors for years, now supercharged by AI voice cloning. In one survey, an alarming 77% of people targeted by voice-clone scams reported losing money.
Gary Schildhorn Senate Testimony - “No Remedy”#
Philadelphia attorney Gary Schildhorn testified before the Senate Special Committee on Aging about his near-miss with AI voice fraud.
The Attempted Scam:
- Schildhorn received a call from his “son” claiming to be in jail after a car accident involving a pregnant woman
- An “attorney” called next, requesting cryptocurrency to post bail
- Schildhorn nearly sent $9,000 before confirming with his daughter-in-law that it was a scam
The “No Remedy” Problem:
After avoiding the scam, Schildhorn contacted local authorities and the FBI. They told him they couldn’t act because he hadn’t actually transferred money.
His testimony to Congress was damning:
“It’s fundamental if we’re harmed by somebody, there’s a remedy either through the legal system or through law enforcement. In this case, there is no remedy, and that fundamental basis is broken.”
“You can’t find anyone to sue.”
This highlights the core liability challenge: deepfake scammers often operate overseas, use cryptocurrency, and leave no traceable path for civil recovery or criminal prosecution.
Federal Regulatory Response#
FCC Declares AI Robocalls Illegal (February 2024)#
On February 8, 2024, the Federal Communications Commission unanimously ruled that AI-generated voices in robocalls are “artificial or prerecorded voices” under the Telephone Consumer Protection Act (TCPA).
Key Provisions:
- Calls using AI voice cloning are illegal unless callers obtain prior express consent
- The FCC can fine violators up to $23,000 per call
- State Attorneys General can pursue civil enforcement
- Individual consumers can sue for up to $1,500 per unwanted call
Context:
The ruling came after AI-generated robocalls impersonating President Biden reached thousands of New Hampshire voters before the state’s primary, discouraging Democrats from voting.
FCC Chairwoman Jessica Rosenworcel stated: “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters.”
Practical Impact:
The ruling provides enforcement tools but has limited effect on overseas scammers. Its primary value is enabling prosecution of domestic actors and telecom companies that facilitate AI robocalls.
TAKE IT DOWN Act (May 2025)#
On May 19, 2025, President Trump signed the TAKE IT DOWN Act into law, the first federal legislation specifically limiting harmful AI deepfakes.
Criminal Provisions:
- Criminalizes knowingly publishing non-consensual intimate imagery, including AI-generated deepfakes
- Penalties include fines and up to three years in prison
- Criminal provisions took effect immediately upon signing
Platform Obligations:
- Covered platforms (websites and apps) must establish notice-and-takedown processes
- Platforms must remove reported deepfake content within 48 hours
- Platforms have until May 19, 2026 to implement takedown procedures
Limitations:
The Act focuses primarily on intimate imagery rather than voice-cloning fraud. Critics, including the Electronic Frontier Foundation, have raised concerns that vague language could chill legitimate speech.
FTC Voice Cloning Challenge (2024)#
The FTC’s Voice Cloning Challenge sought technological solutions to deepfake fraud, with winners announced in April 2024.
Winning Approaches:
- AI Detect (OmniSpeech) - Uses AI algorithms to distinguish genuine from synthetic voice patterns
- DeFake (Washington University) - Adds imperceptible distortions to voice samples that make cloning more difficult
- Pindrop - Real-time voice clone detection evaluating calls in two-second chunks (a sketch of this chunked-scoring idea follows this list)
- OriginStory - Authenticates human voices and embeds watermarks
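To make the chunked-scoring approach concrete, here is a minimal Python sketch: the call audio is split into two-second windows, each window is scored by a detector, and the call is flagged once recent scores stay high. The `score_chunk` stub and the sample rate are illustrative assumptions, not Pindrop’s actual system.

```python
import numpy as np

SAMPLE_RATE = 8_000      # assumed narrowband telephony rate
CHUNK = SAMPLE_RATE * 2  # two-second windows, per the description above

def score_chunk(chunk: np.ndarray) -> float:
    """Hypothetical detector stub: probability that the chunk is synthetic.
    A real deployment would run a trained classifier here."""
    return 0.0  # placeholder

def monitor_call(audio: np.ndarray, threshold: float = 0.8) -> bool:
    """Score a call window-by-window; flag it once the rolling average
    of the last three chunk scores crosses the threshold."""
    scores = []
    for start in range(0, len(audio) - CHUNK + 1, CHUNK):
        scores.append(score_chunk(audio[start:start + CHUNK]))
        if len(scores) >= 3 and np.mean(scores[-3:]) >= threshold:
            return True  # likely cloned voice; alert the agent mid-call
    return False
```

Averaging over a rolling window rather than reacting to a single chunk trades a few seconds of latency for fewer false alarms from one noisy segment.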
Key Takeaway:
“There is no single solution to this problem,” the FTC noted. The challenge highlighted that protection requires multiple intervention points: upstream prevention, real-time detection, and post-use evaluation.
Pending Federal Legislation: NO FAKES Act#
The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), reintroduced in April 2025, would create the first federal intellectual property protection for voice and likeness.
Key Provisions:
- Creates federal “right of publicity” for name, image, likeness, and voice
- Protection extends 70 years after death (controllable by heirs)
- Requires platforms to “promptly remove unauthorized deepfakes” upon notice
- Carveouts for news, parody, historical works, and criticism
Industry Support:
The bill has backing from SAG-AFTRA, RIAA, MPA, YouTube, and OpenAI, a rare coalition spanning entertainment and technology.
State Law Developments#
Tennessee ELVIS Act (2024)#
Tennessee’s ELVIS Act (Ensuring Likeness Voice and Image Security Act), signed March 21, 2024, became the first state law specifically protecting against AI voice cloning.
Key Features:
- Explicitly includes a person’s voice as a protected property right
- “Voice” defined to include actual voice and AI simulations
- Creates liability for anyone who “makes available an algorithm, software, tool, or other technology” with the “primary purpose” of creating unauthorized voice recordings
- Allows treble damages for knowing violations
Penalties:
- Civil cause of action plus potential criminal prosecution
- Class A misdemeanor: up to 11 months, 29 days in jail and/or fines up to $2,500
- Treble damages plus attorney’s fees for unauthorized use of military members’ voices
Legislative History:
The ELVIS Act passed unanimously (93-0 in the House, 30-0 in the Senate) and took effect July 1, 2024. It amends Tennessee’s 1984 right of publicity law originally enacted after litigation over Elvis Presley’s estate.
Pennsylvania Act 35 (2025)#
On July 7, 2025, Pennsylvania Governor Shapiro signed Act 35 of 2025, classifying deepfakes as “digital forgeries.”
Criminal Provisions:
- Third-degree felony for creating AI-generated fake voices, images, or videos intended to injure, exploit, or defraud
- Misdemeanor for non-consensual impersonation without fraudulent intent
Exceptions:
- Satire and content in the public interest
- Technology companies providing deepfake creation tools (if they didn’t intentionally facilitate malicious use)
- Platforms disseminating content (if they didn’t intentionally facilitate creation)
- Disclaimer defense: labeling content as fake
Significance:
The law demonstrates states’ willingness to impose felony-level penalties for deepfake fraud while carving out protections for legitimate uses and technology providers.
The Liability Framework#
Who Bears Responsibility?#
When AI voice cloning enables fraud, liability could potentially attach to multiple parties:
1. The Scammer
Direct liability is clearest: intentional fraud is actionable under state and federal law. But practical recovery is often impossible:
- Scammers frequently operate overseas
- Cryptocurrency payments are difficult to trace
- As Schildhorn testified: “There is no remedy”
2. AI Platform Developers
Tennessee’s ELVIS Act creates liability for those who “make available” voice cloning technology if its “primary purpose” is unauthorized recordings. But general-purpose AI platforms may escape liability:
- First Amendment protections for neutral tools
- Section 230 may shield platforms from liability for user-generated content
- Pennsylvania’s Act 35 explicitly exempts technology companies that don’t “intentionally facilitate” malicious use
3. Social Media Platforms (Source of Voice Data)
Could platforms where voice samples are harvested face liability?
- No direct precedent exists
- TAKE IT DOWN Act requires takedown of deepfake content, not prevention of voice harvesting
- Potential negligence claims for inadequate security of voice data remain untested
4. Financial Institutions
Could banks or payment processors face liability for facilitating deepfake fraud transfers?
- Traditional wire fraud liability may apply
- Emerging questions about duty to implement deepfake detection
- The Arup case involved 15 separate transfers without additional verification
5. Employers (Vicarious Liability)
When employees are deceived by deepfake impersonations of executives:
- Traditional respondeat superior may not apply (employee was deceived, not negligent)
- Questions of adequate training and verification protocols
- Corporate duty of care to shareholders for fraud prevention
Emerging Legal Theories#
Product Liability:
Could AI voice cloning tools be treated as defective products?
- Design defect: failure to implement safeguards against fraud
- Failure to warn: inadequate disclosure of misuse potential
- Manufacturing defect: security vulnerabilities enabling unauthorized access
Negligence:
- Duty of care to implement voice authentication safeguards
- Foreseeability of deepfake fraud when providing cloning technology
- Causation linking platform to specific fraud incidents
Right of Publicity:
- Tennessee’s ELVIS Act and the pending NO FAKES Act create property rights in voice
- Unauthorized commercial use of a cloned voice constitutes a publicity-right violation
- Does non-commercial fraud constitute “use” under publicity law?
The Emerging Standard of Care#
For Businesses#
1. Transaction Verification Protocols
The Arup case demonstrates that video calls can no longer be trusted for high-value authorizations (a minimal sketch of the callback rule follows this list):
- Implement multi-factor verification for transactions above thresholds
- Require callback to known numbers (not numbers provided in the call)
- Consider in-person verification for extraordinary requests
- Establish code words known only to authorized personnel
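To illustrate the callback rule, here is a minimal sketch of an approval check, assuming a hypothetical internal directory and request format; none of these names come from the Arup case or any real system.

```python
from dataclasses import dataclass

# Internal, pre-vetted contacts; the only numbers verification may use.
DIRECTORY = {"cfo": "+1-555-0100"}

@dataclass
class TransferRequest:
    requester_role: str
    amount_usd: float
    callback_number_in_request: str  # deliberately never consulted below

def release_allowed(req: TransferRequest, confirmed_via: str,
                    second_approver: bool, threshold: float = 50_000) -> bool:
    """Allow a transfer only if small, or verified out-of-band by callback
    to the directory number plus sign-off from a second approver."""
    if req.amount_usd < threshold:
        return True
    known = DIRECTORY.get(req.requester_role)
    # A deepfaked video call, or a number embedded in the request itself,
    # proves nothing; trust only a callback placed to the known number.
    return confirmed_via == known and second_approver
```

The essential design choice is that the number supplied in the request is carried but never trusted; verification always goes out-of-band to a pre-vetted contact.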
2. Employee Training
- Train staff to recognize deepfake red flags
- Emphasize that “seeing is not believing” in the AI era
- Create clear escalation procedures for suspicious requests
- Document training for potential negligence defense
3. Incident Response
- Preserve all communications immediately
- Report to FBI’s IC3 (Internet Crime Complaint Center)
- Engage counsel experienced in AI fraud
- Notify insurers promptly
For AI Platform Developers#
1. Know Your Customer
- Implement identity verification for voice cloning users
- Maintain records of who creates which cloned voices (see the sketch after this list)
- Consider requiring consent verification from voice owners
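As a rough illustration of that record-keeping idea, a platform might log each clone with the creator’s verified account, the imitated voice owner, and a hash of the signed consent artifact. The field names below are hypothetical.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CloneRecord:
    creator_id: str       # verified platform account that made the clone
    voice_owner_id: str   # identity the clone imitates
    consent_sha256: str   # hash of the signed consent document
    created_at: str       # UTC timestamp, for audit and discovery

def log_clone(creator_id: str, voice_owner_id: str,
              consent_doc: bytes) -> CloneRecord:
    """Create an immutable audit entry tying a clone to its consent proof."""
    return CloneRecord(
        creator_id=creator_id,
        voice_owner_id=voice_owner_id,
        consent_sha256=hashlib.sha256(consent_doc).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```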
2. Technical Safeguards
- Embed watermarks identifying AI-generated audio (sketched after this list)
- Implement detection capabilities for cloned content
- Rate-limit cloning to prevent mass production
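For the watermarking bullet above, here is a deliberately simple sketch that hides an identifier in the least significant bit of 16-bit PCM samples. Production watermarks use perceptual or spread-spectrum methods that survive compression; this LSB version is illustrative only.

```python
import numpy as np

def embed_watermark(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of int16 PCM samples.
    Inaudible, but destroyed by lossy compression; illustration only."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("clip too short for payload")
    marked = samples.copy()
    marked[:bits.size] = (marked[:bits.size] & ~1) | bits
    return marked

def extract_watermark(samples: np.ndarray, n_bytes: int) -> bytes:
    """Recover the first n_bytes of an embedded payload."""
    bits = (samples[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()
```

A 16-byte identifier occupies 128 samples, only a few milliseconds of audio at telephone sample rates.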
3. Terms of Service
- Explicitly prohibit fraudulent use
- Reserve right to cooperate with law enforcement
- Require users to obtain voice owner consent
4. Disclosure
- Warn users about potential misuse
- Document safety measures for regulatory compliance
- Prepare for discovery in fraud litigation
For Individuals#
1. Protect Your Voice
- Consider privacy settings on social media videos
- Be aware that podcasts, webinars, and YouTube create cloning opportunities
- Establish family code words for emergency verification
2. Verify Before Sending Money
- Never trust voice alone for financial requests
- Call back on known numbers
- Ask questions only the real person would know
- Be especially cautious with urgency and secrecy demands
3. Report Incidents
- File complaints with the FTC even if no money was lost
- Report to state Attorney General
- Document everything for potential future enforcement
Looking Forward#
The deepfake liability landscape is evolving rapidly:
Pending Legislation: The NO FAKES Act could create the first federal right of publicity, establishing clearer liability for unauthorized voice cloning.
State Action: More states are following Tennessee and Pennsylvania with criminal and civil penalties for malicious deepfakes.
Technology Solutions: The FTC Voice Cloning Challenge highlighted emerging detection and watermarking technologies that could become standard-of-care requirements.
Enforcement Gaps: Until cryptocurrency tracing and international cooperation improve, civil recovery from scammers will remain difficult.
Insurance Evolution: Professional liability and cyber policies are beginning to address deepfake fraud, but coverage gaps remain significant.
The fundamental challenge remains the one Gary Schildhorn identified: when AI enables fraud at scale, the legal system struggles to provide remedies. Companies, platforms, and individuals must increasingly focus on prevention, because once the money is gone, it’s usually gone.