The Epidemic in Numbers#
AI-generated fake legal citations have become a crisis in American courts. What began as an isolated incident in 2023 has exploded into a systemic problem threatening the integrity of legal proceedings.
- 200+ documented incidents of legal hallucinations in U.S. courts (Stanford Law database)
- From 2 cases/week to 2-3 cases/day - the acceleration in 2025
- 21 of 23 citations fabricated in one California appellate brief
- Multiple AI tools implicated: ChatGPT, Claude, Gemini, Perplexity, Microsoft Copilot, Google Bard
Landmark Cases#
Mata v. Avianca, Inc. (S.D.N.Y. 2023)#
The case that started it all. Attorney Steven Schwartz, with 30+ years of experience, submitted a brief containing six completely fabricated cases generated by ChatGPT.
What Happened:
- Client Roberto Mata sued Avianca Airlines for personal injury
- When Avianca moved to dismiss on statute of limitations, Schwartz used ChatGPT for research
- ChatGPT generated fake cases: Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, Varghese v. China Southern Airlines
- When asked if the cases were real, ChatGPT doubled down: “Yes,” they “can be found in reputable legal databases such as LexisNexis and Westlaw”
The Sanctions:
- $5,000 fine against attorneys Peter LoDuca, Steven Schwartz, and their firm
- Required to send letters to each judge falsely identified as authoring the fake opinions
- Court found subjective bad faith under Rule 11 - not merely for using AI, but for:
- Failing to verify citations
- Swearing to the truth of false affidavits
- Continuing to defend fake cases after red flags emerged
Key Quote:
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance… However, existing rules impose upon lawyers a duty to ensure the accuracy of their filings.”
Noland v. Land of the Free (Cal. Ct. App. September 2025)#
The first state appellate court to sanction an attorney for AI hallucinations.
Facts:
- Attorney Amir Mostafavi used ChatGPT, Claude, Gemini, and Grok to draft appellate briefs
- 21 of 23 case quotations in plaintiff’s opening brief were fabricated
- Attorney submitted multiple briefs with fake citations even after being warned
Consequences:
- $10,000 sanctions
- Reported to the California State Bar
- Required to show the opinion to his client and certify compliance
- Lost the underlying case
Why It Matters: This was the first state Court of Appeal opinion sanctioning a lawyer specifically for AI-generated hallucinations, establishing precedent for state courts nationwide.
MyPillow/Lindell Case (D. Colo. July 2025)#
High-profile defamation litigation exposed AI misuse by attorneys representing MyPillow CEO Mike Lindell.
Facts:
- Two attorneys submitted a motion filled with mistakes and nonexistent case citations
- When questioned about AI use, attorneys were not forthcoming
Sanctions:
- $3,000 each for two attorneys
- Court specifically noted the lack of candor when questioned
Colorado Supreme Court Discipline (2025)#
The first state supreme court disciplinary proceeding resulting in suspension for AI hallucinations.
Facts:
- Denver attorney caught using ChatGPT to draft motions
- Denied using AI when confronted
- Investigation revealed a text message to his paralegal admitting that, “like an idiot,” he hadn’t checked ChatGPT’s work
Consequence:
- 90-day suspension from the practice of law
- Accepted discipline rather than contest charges
Arizona Social Security Case (D. Ariz. August 2024)#
Facts:
- Attorney submitted brief “replete with citation-related deficiencies, including those consistent with artificial intelligence generated hallucinations”
- 12 of 19 cases cited were fabricated, misleading, or unsupported
Ruling:
- U.S. District Judge Alison Bachus sanctioned the attorney
- Court noted that the systematic pattern of errors indicated AI generation
United States v. Cohen (S.D.N.Y.)#
A notable case where no sanctions were imposed, illustrating where courts draw the line.
Facts:
- Michael Cohen (former Trump attorney) submitted fake cases
- Used Google Bard, believing it to be a “super-charged search engine”
- Cohen’s attorney, who filed the brief, was found not to have acted in bad faith
Ruling:
- Judge Jesse Furman declined sanctions
- Found no subjective bad faith - distinguished from Mata v. Avianca
Key Distinction: The Cohen ruling shows that sanctions require bad faith, not merely negligent AI use. However, lack of sanctions doesn’t mean lack of professional consequences.
AI Tools Implicated#
Every major generative AI platform has produced legal hallucinations that made it into court filings:
| AI Tool | Notable Incidents |
|---|---|
| ChatGPT | Mata v. Avianca, Noland v. Land of the Free, Colorado discipline |
| Claude (Anthropic) | Noland v. Land of the Free |
| Google Gemini | Noland v. Land of the Free |
| Google Bard | United States v. Cohen |
| Microsoft Copilot | Multiple federal court incidents |
| Perplexity | Documented in 2025 cases |
| Grok (xAI) | Noland v. Land of the Free |
Court Rules on AI Disclosure#
Courts have responded with a patchwork of standing orders and rules:
Federal Courts#
Fifth Circuit (2024)
- Proposed a rule requiring certification of AI use; the public comment period closed in 2024
- Under the proposal, a “material misrepresentation” could lead to the filing being stricken or sanctions imposed
Judge Brantley Starr (N.D. Texas)
- First federal judge to issue an AI standing order
- Requires certificate attesting: (1) no AI used, OR (2) AI-drafted text verified for accuracy
Judge Stephen Vaden (U.S. Court of International Trade)
- Requires disclosure of specific AI programs used
- Requires identification of all AI-drafted portions
- Requires certification that AI use didn’t disclose confidential information
Judge Michael Baylson (E.D. Pennsylvania)
- Broader order requiring disclosure of any type of AI (not limited to generative AI)
Judge Rita Lin (N.D. California)
- More permissive approach: AI use “not prohibited”
- Attorneys must “personally verify the accuracy” of AI research
- “Counsel alone bears ethical responsibility for all statements”
Judge Peter Kang (N.D. California)
- Distinguishes generative AI from traditional tools
- Disclosure not required for “traditional legal research, word processing, spellchecking, grammar checking or formatting software”
State Courts#
Illinois Supreme Court (December 2024)
- “Pro-AI” stance
- Discourages mandatory AI disclosures in state courts
- Recommends judges not require AI disclosure in pleadings
- Relies on existing Rule 11 and competence duties as sufficient
This split between federal and Illinois state court approaches illustrates the lack of consensus.
State Bar Ethics Guidance#
ABA Formal Opinion 512 (July 2024)#
The American Bar Association’s first formal guidance on generative AI addresses core ethical obligations:
Competence (Rule 1.1)
- Lawyers must understand AI tools to use them competently
- Must possess or acquire skill to use AI “effectively and ethically”
Confidentiality (Rule 1.6)
- Must protect client information when using AI
- Must avoid inputting sensitive client data into systems that may share it with third parties
Communication (Rule 1.4)
- Disclosure of AI use depends on facts of each case
- Required when:
- Client inquires
- Retainer agreement requires it
- Inputting client confidential information into AI
- AI use affects fee reasonableness
Fees (Rule 1.5)
- AI efficiencies must benefit client financially
- Cannot bill for hours not genuinely worked
- Must ensure fees remain reasonable given AI assistance
State-by-State Survey#
States with Formal AI Ethics Guidance:
| State | Opinion | Key Requirements |
|---|---|---|
| California | Practical Guidance (Nov 2023) | First state guidance; verify outputs; protect confidentiality; don’t bill for time saved |
| Texas | Opinion 705 (Feb 2025) | Competence to understand AI; protect confidentiality; pass efficiency savings to clients |
| Florida | Opinion 24-1 (Jan 2024) | AI permitted; prioritize confidentiality, accuracy, ethical billing; advertising rules |
| New York | NYSBA Task Force + NYC Bar 2024-5 | Comprehensive framework; refusing AI may raise competence concerns |
| Pennsylvania | Joint Opinion 2024-200 | Inform clients of AI use; disclose AI expenses; addresses bias |
| North Carolina | 2024 FEO 1 | Substantive AI delegation = outsourcing requiring client consent |
| Kentucky | KBA E-457 (March 2024) | No routine disclosure required unless outsourced or client charged |
| D.C. | Ethics Opinion 388 | Attorneys’ use of GAI in client matters |
Browse detailed state guides: State AI Legal Ethics Rules
States Without Formal Guidance: Idaho, Indiana, Kansas, Maine, Maryland, Massachusetts, South Carolina, South Dakota, Wisconsin, Wyoming, and others.
Why “The AI Told Me To” Is Not a Defense#
Courts have uniformly rejected attempts to shift blame to AI systems:
The Verification Duty#
Every court addressing AI hallucinations has emphasized that attorneys bear ultimate responsibility for their filings:
“A lawyer who files a legal brief in court asserting facts and making arguments that he did not verify, from an AI program that his own affidavit concedes ‘can also generate incorrect information,’ shows nothing less than subjective bad faith.” - Judge P. Kevin Castel, Mata v. Avianca
No New Legal Defense#
Using AI creates no new defense to malpractice or ethical violations. Traditional duties apply:
- Rule 11 Certification - By signing a filing, attorneys certify that factual contentions have evidentiary support and that legal contentions are warranted by existing law
- Duty of Competence - Must possess and apply adequate legal knowledge and skill
- Duty of Candor - Owe duty of truthfulness to tribunals
- Supervisory Duty - Must supervise all work, including AI-generated work
The Tool Analogy Fails#
Courts reject treating AI like other legal tools:
- Westlaw/LexisNexis: Retrieve real cases - errors are in interpretation, not existence
- Spell check: Does not generate substantive content
- AI chatbots: Generate novel, unverified content that may be entirely fabricated
Protecting Yourself: Best Practices#
For Attorneys#
- Verify every citation - Confirm each case exists in Westlaw, LexisNexis, or official reporters (a first-pass automation sketch follows this list)
- Check quotes - Ensure quoted language actually appears in cited cases
- Shepardize/KeyCite - Confirm cases are still good law
- Document your process - Maintain records of verification steps
- Understand your tools - Know the capabilities and limitations of any AI you use
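Parts of the first two steps can be automated as a first pass. Below is a minimal Python sketch: it uses the open-source eyecite library (from the Free Law Project) to extract citations from a draft, then routes each through a lookup step. The `lookup_in_reporter` function is a hypothetical placeholder, not a real API - wire it to whatever verification source your firm actually uses. A pass like this supplements manual verification; it never replaces it.

```python
# pip install eyecite
# First-pass citation check: extract every case citation from a draft and
# flag each for verification. A sketch only - not a substitute for checking
# Westlaw, LexisNexis, or official reporters by hand.
from eyecite import get_citations
from eyecite.models import FullCaseCitation


def lookup_in_reporter(citation: str) -> bool:
    """Hypothetical placeholder: query your firm's verification source
    (e.g., Westlaw, LexisNexis, CourtListener) and return True only if
    the cited case actually exists."""
    raise NotImplementedError("Wire this to a real legal database.")


def first_pass_check(draft_text: str) -> list[str]:
    """Return the citations in a draft that could not be verified."""
    unverified = []
    for cite in get_citations(draft_text):
        if not isinstance(cite, FullCaseCitation):
            continue  # skip short-form, supra, and id. citations
        cite_str = cite.corrected_citation()  # normalized, e.g. "347 U.S. 483"
        try:
            if not lookup_in_reporter(cite_str):
                unverified.append(cite_str)
        except NotImplementedError:
            unverified.append(cite_str)  # no lookup wired up: treat as unverified
    return unverified


if __name__ == "__main__":
    draft = "As held in Brown v. Board of Education, 347 U.S. 483 (1954), ..."
    for c in first_pass_check(draft):
        print(f"NEEDS MANUAL VERIFICATION: {c}")
```

Anything the script flags still requires a human to open the case in a real database and confirm both its existence and the quoted language.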
For Law Firms#
- Establish written AI use policies
- Train associates on verification requirements
- Consider prohibiting AI for certain high-stakes work
- Implement quality control checkpoints (see the pre-filing gate sketch after this list)
- Review malpractice insurance coverage for AI-related claims
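As one way to implement the checkpoint idea, here is a minimal sketch of a pre-filing gate: every citation in a brief must carry a dated verification record (who checked it, and where) before the document clears review. All names and structures here are illustrative, not a prescribed workflow.

```python
# Pre-filing quality-control gate: block any filing whose citations lack a
# documented verification record. Illustrative sketch only.
from dataclasses import dataclass
from datetime import date


@dataclass
class VerificationRecord:
    citation: str      # e.g. "347 U.S. 483"
    verified_by: str   # attorney or paralegal who checked it
    source: str        # e.g. "Westlaw", "official reporter"
    checked_on: date


def clear_for_filing(citations_in_brief: list[str],
                     records: list[VerificationRecord]) -> None:
    """Raise if any citation in the brief lacks a verification record."""
    verified = {r.citation for r in records}
    missing = [c for c in citations_in_brief if c not in verified]
    if missing:
        raise RuntimeError(
            f"Filing blocked: {len(missing)} unverified citation(s): {missing}"
        )


# Example: one of two citations has no record, so the gate blocks the filing.
records = [VerificationRecord("347 U.S. 483", "A. Associate", "Westlaw", date.today())]
try:
    clear_for_filing(["347 U.S. 483", "999 F.9th 999"], records)
except RuntimeError as err:
    print(err)
```

A gate like this also produces exactly the kind of documented verification trail that courts and insurers increasingly ask about.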
For Clients#
- Ask specifically whether AI was used in your matter
- Request disclosure of AI use in retainer agreements
- Verify that attorneys have quality control processes
- Consider whether AI use affected fees charged
The Malpractice Dimension#
Beyond court sanctions and bar discipline, AI hallucinations create malpractice exposure:
Coverage Questions#
Many malpractice insurers are adding AI-specific provisions. See our AI Insurance Coverage Analysis for a detailed look at emerging AI-specific products, coverage grants, and claims-made issues:
- Some policies now exclude AI-related claims
- Others require disclosure of AI use
- Premium adjustments may apply for heavy AI users
Damages#
AI-generated errors can cause:
- Lost cases (as in Noland - plaintiff lost the appeal)
- Sanctions passed through to clients
- Reputational harm
- Subsequent malpractice claims from clients
Emerging Claims#
As AI use expands, expect claims alleging:
- Failure to use AI where it would have reduced costs or improved efficiency
- Improper use of AI (hallucinations)
- Failure to disclose AI use to clients
- Billing for AI-assisted work at non-AI rates
Frequently Asked Questions#
Can attorneys use AI for legal research at all?
Yes. As the Mata court put it, there is nothing inherently improper about using a reliable AI tool for assistance - but every citation and quotation it produces must be independently verified before filing.
What sanctions can attorneys face for AI hallucinations?
Documented consequences include monetary fines ($3,000 to $10,000 in the cases above), referral to state bar authorities, suspension from practice (90 days in the Colorado matter), stricken filings, and losing the underlying case.
Do I have to disclose AI use to the court?
It depends on the jurisdiction and the judge. Several federal judges require certification or disclosure by standing order, while the Illinois Supreme Court discourages mandatory disclosure. Check the standing orders of your assigned judge.
Do I have to disclose AI use to my client?
Under ABA Formal Opinion 512, it depends on the facts: disclosure is required when the client asks, when the retainer agreement requires it, when client confidential information is input into an AI system, or when AI use affects fee reasonableness.
Is it malpractice to use AI that generates fake citations?
Using AI is not malpractice per se, but filing unverified fabricated citations can breach the duties of competence and candor and create malpractice exposure on top of court sanctions and bar discipline.
Which AI tools are safe to use for legal research?
No general-purpose chatbot can be trusted to generate citations: every major platform in the table above has produced hallucinations that reached court filings. Whatever the tool, each citation must be verified in Westlaw, LexisNexis, or official reporters.
Resources#
- ABA Formal Opinion 512 - First ABA guidance on attorney AI use
- Justia 50-State AI Ethics Survey - State-by-state ethics rules
- Stanford Law Legal Hallucination Database - Tracking of AI incidents in courts
- NYC Bar Formal Opinion 2024-5 - New York guidance
- Texas Ethics Opinion 705 - Texas bar guidance
Concerned About AI in Your Legal Matter?
If you believe your attorney used AI-generated content without proper verification, or if fabricated citations affected your case outcome, you may have grounds for a malpractice claim or bar complaint. Understanding the evolving standards is critical.
Consult a Legal Malpractice Attorney