AI Defamation and Hallucination Liability

The New Frontier of Defamation Law

Courts are now testing what attorneys describe as a “new frontier of defamation law” as AI systems increasingly generate false, damaging statements about real people. When ChatGPT falsely accused a radio host of embezzlement, when Bing confused a veteran with a convicted terrorist, when Meta AI claimed a conservative activist participated in the January 6 riot, these weren’t glitches. They represent a fundamental challenge to defamation law built on human publishers and human intent.

The core problem: Large language models “hallucinate.” They generate plausible-sounding but completely fabricated information, including false accusations of crimes, professional misconduct, and other reputation-destroying statements about real, identifiable people. The legal question: who is liable when AI defames?

Categories of AI Defamation

Legal experts identify four primary ways AI systems generate defamatory content:

1. Hallucination

The AI fabricates information entirely. ChatGPT invented a fraud case against Mark Walters that never existed. Meta AI fabricated Robby Starbuck’s participation in the Capitol riot. There was no underlying source; the AI created false facts from nothing.

2. Juxtaposition

Truthful facts about different people get conflated, falsely implying they describe the same person. Microsoft’s Bing combined accurate information about veteran Jeffery Battle with facts about terrorist Jeffrey Battle (spelled differently), creating a defamatory portrait of the wrong person.

3. Libel by Omission

AI leaves out critical context that fundamentally changes meaning. Brian Hood was the whistleblower who exposed a bribery scandal, but ChatGPT falsely portrayed him as a convicted participant who served prison time.

4. Misquote

AI attributes words inaccurately to real people, fabricating quotes they never said or positions they never took.

Landmark Cases

Walters v. OpenAI (Georgia, 2023-2025)

The first major U.S. lawsuit to address AI defamation produced a significant precedent, though not the outcome plaintiff Mark Walters sought.

The Facts:

  • A journalist researching a lawsuit involving the Second Amendment Foundation asked ChatGPT for a summary
  • ChatGPT falsely claimed Mark Walters, a radio host, had been sued for “defrauding and embezzling funds” while serving as the organization’s treasurer
  • Walters had no connection to the organization and was never accused of any wrongdoing
  • Walters filed a defamation lawsuit in June 2023

The Ruling (May 2025): Judge Tracie Cason granted summary judgment to OpenAI on three independent grounds:

  1. Not “reasonably understood as describing actual facts”: The court found that ChatGPT’s warnings that it “can and does sometimes provide factually inaccurate information” meant reasonable users wouldn’t take outputs as verified truth

  2. No fault established: OpenAI’s “extensive warnings to users that errors of this kind could occur negate any possibility that a jury could find OpenAI acted with actual malice”

  3. No damages: The journalist who received the false information recognized it as untrue within 90 minutes and never republished it; Walters conceded he suffered no quantifiable harm

Key Takeaway: Strong disclaimers and “industry-leading efforts” to reduce hallucinations may protect AI developers, but only when plaintiffs cannot show publication to believing audiences or resulting damages.

Starbuck v. Meta (Delaware, 2025)

This case produced a rapid settlement with significant implications.

The Facts:

  • Conservative activist Robby Starbuck was notified in August 2024 that Meta AI was spreading false information about him
  • Meta AI claimed he had “pled guilty over disorderly conduct” on January 6, was “linked to the Q-Anon conspiracy,” and was “anti-vaccine”
  • By April 2025, Meta AI’s voice feature was claiming Starbuck “poses a significant threat to his children’s wellbeing” and “authorities should consider removing parental rights”
  • Starbuck filed suit April 29, 2025, seeking over $5 million

The Settlement (August 2025): Meta settled after chief global affairs officer Joel Kaplan publicly apologized, calling the AI errors “unacceptable.” As part of the settlement, Starbuck became a consultant to Meta’s Product Policy team to address political bias and hallucinations.

Key Takeaway: When AI defamation continues after notice, companies face heightened liability risk. Meta’s settlement suggests AI companies may prefer to resolve claims quickly rather than test Section 230 defenses.

Battle v. Microsoft (Maryland, 2023-2024)

The Facts:

  • Aerospace educator and Air Force veteran Jeffery Battle sued Microsoft after Bing’s AI confused him with convicted terrorist Jeffrey Leon Battle
  • A Bing search displayed an AI-generated summary stating the veteran had “been sentenced for seditious conspiracy” for trying to join the Taliban
  • The AI combined accurate facts about two different people with similar names into a defamatory portrait

The Outcome: In October 2024, the court granted Microsoft’s motion to compel arbitration, staying court proceedings. The case now proceeds privately.

Key Takeaway: Arbitration clauses in terms of service may shield AI companies from public litigation, though the underlying defamation questions remain unresolved.

Wolf River Electric v. Google (Minnesota, 2025)

The Facts:

  • Solar contractor Wolf River Electric discovered Google’s AI Overview falsely claimed the Minnesota Attorney General was suing them for “deceptive sales practices” and “misleading customers”
  • No such lawsuit existed; Google’s AI hallucinated it entirely
  • The company documented specific business losses: a $150,000 contract termination and $174,000 in canceled projects from customers who cited the false AG lawsuit

Current Status: Filed March 2025, the case was moved to federal court. Wolf River seeks $110-210 million in damages across five counts including defamation per se and deceptive trade practices violations.

Key Takeaway: Business defamation may prove easier to litigate than individual claims because companies can document concrete lost revenue. This case may produce the first detailed judicial analysis of how defamation law applies to AI Overviews.

International: The Holmen Case (Norway, 2025)

The Facts:

  • Norwegian citizen Arve Hjalmar Holmen, a private individual with no public profile, discovered ChatGPT was claiming he murdered two of his children and was sentenced to 21 years in prison
  • The claim was entirely fabricated, though ChatGPT included some accurate details (his hometown, that he has children) that made the fabrication more believable
  • Advocacy group noyb filed a GDPR complaint with Norway’s Data Protection Authority

The Claims: Rather than pursuing defamation, noyb argues OpenAI violates GDPR Article 5(1)(d), which requires personal data to be “accurate and kept up to date.” The complaint seeks an order to delete defamatory outputs and fine-tune models to eliminate inaccuracies.

Key Takeaway: European data protection law may provide an alternative framework for addressing AI hallucinations about individuals, with potentially broader remedies than defamation law.

Australian Mayor Brian Hood (2023)

The Facts:

  • ChatGPT falsely told users that Hepburn Shire mayor Brian Hood was jailed in an international bribery scandal involving the Reserve Bank of Australia
  • Hood was actually the whistleblower who exposed the scandal, not a participant
  • His lawyers sent OpenAI a concerns notice in March 2023, giving 28 days to correct the errors

Outcome: OpenAI reportedly corrected the output before a formal lawsuit was filed. The case established that the threat of litigation could prompt AI companies to address specific hallucinations.

Who Is Liable When AI Defames?

The Developer/Platform

AI companies face the strongest liability arguments:

  • They are the “author” and “publisher”: Unlike traditional platforms hosting user content, AI companies generate the defamatory statements themselves
  • They profit from the technology: Commercial deployment creates duty-of-care arguments
  • They control training and outputs: Design choices about training data, safety systems, and verification affect hallucination rates

The Walters decision shows disclaimers and safety efforts may provide some protection, but companies that fail to correct errors after notice (as in Starbuck) face heightened exposure.

The User Who Prompts

Limited liability for private use: If you ask ChatGPT about someone and it hallucinates a false response you never share, you haven’t “published” anything.

Liability for republication: Anyone who uses AI to generate information about a person and then conveys it to others may be liable if the content is false and defamatory. Because AI systems are known to hallucinate, failure to independently verify accuracy before publication likely constitutes negligence.

The Organization Deploying AI

Companies that integrate AI into customer-facing products (chatbots, search features, recommendation systems) bear responsibility for the outputs. The Air Canada chatbot case established that companies cannot claim their AI operates as a “separate legal entity.”

Does Section 230 Protect AI Companies?

Likely not for generated content. Section 230 protects platforms from liability for content “provided by another information content provider”, that is, third-party content.

When AI generates content rather than hosting it, the platform itself becomes the “information content provider.” As legal scholar Collin Walke observes: “AI platforms should not receive Section 230 protection because the content is generated by the platform itself.”

The Wolf River Electric case may produce definitive guidance. University of Minnesota Law School Dean William McGeveran notes that “federal courts have been very receptive” to Section 230 defenses, but AI-generated content, unlike user-submitted content, challenges that traditional analysis.

Key distinction:

  • Traditional search: Indexing and displaying third-party content → Section 230 likely applies
  • AI Overviews/Chatbots: Generating new statements by synthesizing sources → Section 230 likely doesn’t apply

Elements of AI Defamation Claims

Traditional defamation requires proving:

1. Publication

The false statement was communicated to someone other than the plaintiff. AI outputs viewed by users satisfy this element. The Walters case shows problems arise when only one person saw the statement and immediately recognized it as false.

2. Falsity

The statement must be false. AI hallucinations are by definition false; they fabricate information that doesn’t exist.

3. Identification

The statement must be “of and concerning” the plaintiff. AI systems often correctly identify real people by name, occupation, and location, then attribute false facts to them.

4. Fault

  • Private figures: Plaintiff must prove defendant was at least negligent
  • Public figures: Plaintiff must prove “actual malice”, meaning knowledge of falsity or reckless disregard for the truth

The AI malice problem: Proving that a human publisher “knew” something was false or “entertained serious doubts” about its truth makes sense. How does this standard apply to AI systems that lack knowledge and intent?

Courts may focus on the company’s conduct:

  • Did they know hallucinations occurred?
  • Did they continue operating after receiving complaints about specific false statements?
  • Were safety systems adequate given known risks?

The Starbuck case suggests continuing to publish after notice may establish the “reckless disregard” required for public figures.

5. Damages

The plaintiff must show harm to reputation. General reputation damage may suffice, but Walters shows that absent concrete evidence of lost business, relationships, or opportunities, damages claims may fail.

Per se defamation: Some statements (accusations of crimes, professional misconduct, serious disease) are defamatory per se, meaning damages are presumed. AI systems regularly hallucinate exactly these categories of false statements.

Practical Implications

For Individuals Defamed by AI

Document everything:

  • Screenshot the AI output with timestamps
  • Record the specific prompt used (if you know it)
  • Preserve evidence before the company corrects the output
  • Document any concrete harms: lost employment, terminated relationships, denied opportunities

Send a notice:

  • Formal notice to the AI company creates a record
  • Continued publication after notice may establish reckless disregard
  • Many companies will correct specific outputs to avoid litigation

Consider your status:

  • Private figures face lower burdens (negligence) than public figures (actual malice)
  • The higher actual malice standard may be difficult to meet for AI-generated content

For AI Deployers

Disclaimers help but aren’t sufficient:

  • Walters shows disclaimers factor into “reasonable understanding” analysis
  • But disclaimers won’t protect against clear factual statements about identifiable people

Respond to complaints promptly:

  • Continued publication after notice increases liability risk
  • Starbuck settlement shows the cost of ignoring complaints

Implement verification for high-risk outputs:

  • Content about specific individuals carries defamation risk
  • Content involving criminal accusations or professional misconduct carries heightened risk
  • Consider additional verification before generating or displaying such content (a minimal screening sketch follows below)
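A minimal illustration of that screening step, sketched in Python: the keyword list, the `ScreeningResult` structure, and the `screen_output` function are all hypothetical names invented for this sketch, and a production system would rely on named-entity recognition and source verification rather than substring matching.

```python
from dataclasses import dataclass

# Hypothetical keyword list; a real deployment would use NER and claim
# verification, not substring matching.
HIGH_RISK_TERMS = [
    "embezzl", "fraud", "convicted", "indicted", "pled guilty",
    "terroris", "assault", "launder",
]

@dataclass
class ScreeningResult:
    mentions_person: bool
    matched_terms: list[str]
    needs_human_review: bool

def screen_output(text: str, named_people: list[str]) -> ScreeningResult:
    """Hold back any draft that pairs an identifiable person with an
    accusation-type claim until a human has verified it."""
    lowered = text.lower()
    matched = [term for term in HIGH_RISK_TERMS if term in lowered]
    mentions_person = bool(named_people)
    return ScreeningResult(
        mentions_person=mentions_person,
        matched_terms=matched,
        needs_human_review=mentions_person and bool(matched),
    )

if __name__ == "__main__":
    draft = "Jane Doe was convicted of embezzling funds from the foundation."
    print(screen_output(draft, named_people=["Jane Doe"]))
    # needs_human_review=True -> route to a reviewer instead of publishing
```

The design point is simply that outputs pairing an identifiable person with an accusation-type claim get routed to a human reviewer rather than published automatically.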

For Publishers Using AI

Verify before publishing:

  • Republishing AI-generated content about real people without verification likely constitutes negligence
  • AI systems are known to hallucinate; reliance without checking is difficult to defend

Maintain records:

  • Document your verification process
  • Show you didn’t blindly republish AI outputs (one possible record format is sketched below)
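One way to document that process is a structured log entry for every person-specific claim in an AI-assisted draft. The Python sketch below shows one possible shape for such a record; the `VerificationRecord` name and every field in it are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationRecord:
    """One log entry documenting that a person-specific claim in an
    AI-generated draft was checked before publication."""
    model: str                  # model/version that produced the draft
    prompt: str                 # prompt that generated the output
    claim: str                  # the specific factual claim about a person
    sources_checked: list[str]  # e.g. court dockets, news archives
    verified: bool              # whether the claim was independently confirmed
    reviewer: str               # who performed the check
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = VerificationRecord(
    model="example-llm-v1",
    prompt="Summarize the lawsuit against ...",
    claim="Person X was sued for embezzlement",
    sources_checked=["county court docket search", "state AG press releases"],
    verified=False,
    reviewer="editor@example.com",
)
print(record)
```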

Legislative Responses

Texas Responsible AI Governance Act (2025)

Signed June 2025, this law creates liability with fines up to $200,000 per violation for certain intentional AI abuses. Enforcement is limited to the state attorney general.

State Deepfake Laws

Twenty states have enacted laws targeting AI deepfakes in elections. While focused on manipulated media rather than text hallucinations, these laws establish precedent for AI-specific liability frameworks.

Federal Action

Senator Josh Hawley introduced the Artificial Intelligence Risk Evaluation Act of 2025 to create a federal AI oversight framework. Comprehensive federal AI liability legislation remains stalled in Congress.

The Path Forward

AI defamation cases are proliferating. Between June 2023 and August 2024 alone, multiple high-profile cases emerged, and more are being filed as awareness grows.

Courts are developing principles:

  • Disclaimers and safety efforts factor into fault analysis
  • Continued publication after notice increases liability
  • Section 230 faces serious challenges when AI generates rather than hosts content
  • Concrete damages remain essential; general assertions of reputational harm may not suffice

For now, anyone deploying AI that generates statements about real people operates in legal uncertainty. The prudent approach: treat AI outputs about identifiable individuals as potential liability sources requiring verification, implement robust complaint-response systems, and maintain documentation of safety efforts.

The question isn’t whether AI defamation liability will be established; it’s how courts will adapt century-old defamation principles to technology that generates plausible falsehoods at unprecedented scale.
