The legal profession faces unique standard-of-care challenges as AI tools become ubiquitous in practice. From legal research to document review to contract drafting, AI is transforming how lawyers work and creating new liability risks. Since the landmark Mata v. Avianca sanctions in June 2023, more than 200 AI ethics incidents have been documented in legal filings, and every major bar association has issued guidance.
The Hallucination Problem: Mata v. Avianca and Its Aftermath#
The most dramatic AI failures in legal practice involve large language models that confidently fabricate case citations, statutes, and legal principles that do not exist.
Mata v. Avianca, Inc. (S.D.N.Y. 2023)#
In February 2022, Roberto Mata filed a personal injury lawsuit against Avianca Airlines, alleging injury from a metal serving cart during a flight. His attorneys, Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman P.C., used ChatGPT to prepare a legal motion, which contained six fabricated case citations with fictional airlines, invented quotations, and nonexistent judges.
Timeline:
- Avianca’s counsel notified the Court they could not locate several cited cases
- Judge P. Kevin Castel ordered the attorneys to show cause why they should not be sanctioned
- Attorney Schwartz testified he was “operating under the false perception that this website could not possibly be fabricating cases on its own”
- In June 2023, Judge Castel imposed a $5,000 joint sanction on the attorneys and their firm
Post-Mata Incidents (2023-2025)#
The Mata case opened the floodgates for AI-related sanctions:
| Date | Case/Jurisdiction | Issue | Outcome |
|---|---|---|---|
| Oct 2024 | S.D. Cal. | Attorney cited 3 nonexistent cases | $15,000 sanctions recommended (Jan 2025) |
| July 2025 | S.D. Miss. (Judge Wingate) | AI-generated TRO with false parties, misquoted laws | Order vacated, investigation |
| July 2025 | D.N.J. (Judge Neals) | Law clerk used ChatGPT without authorization | Order corrected, clerk disciplined |
| 2024-2025 | Multiple states | Bar disciplinary proceedings | Various admonishments |
Federal Judges’ AI Misuse (2025)#
In a stunning development, federal judges themselves were caught using AI improperly. In July 2025:
- Judge Henry T. Wingate (S.D. Miss.) issued a TRO that named plaintiffs and defendants who were not parties to the case, misquoted state law, and referenced four individuals who do not appear in the case
- Judge Julien Xavier Neals (D.N.J.) disclosed that a temporary assistant used ChatGPT “without authorization, without disclosure, and contrary to chambers policy”
Both incidents prompted Senator Chuck Grassley to formally call on the federal judiciary to regulate AI use.
ABA and State Bar Ethics Guidance#
ABA Formal Opinion 512 (July 29, 2024)#
The American Bar Association Standing Committee on Ethics and Professional Responsibility released its first formal opinion on generative AI, establishing the baseline national framework.
Core requirements under Opinion 512:
- Competence (Rule 1.1): Lawyers must understand how AI functions and possess skill to use it effectively
- Confidentiality (Rule 1.6): Client information must be protected when using third-party AI tools
- Communication (Rule 1.4): Clients should be informed when AI use materially affects representation
- Reasonable Fees (Rule 1.5): Billing must reflect actual time; cannot bill full attorney rates for AI-assisted tasks
- Supervision (Rules 5.1, 5.3): Lawyers must supervise AI use by subordinates and staff
State Bar Ethics Opinions (2024-2025)#
Over 40 state bars have now issued AI ethics guidance. Key opinions include:
| State | Opinion | Key Provisions |
|---|---|---|
| California | Nov 2023 | First state guidance; disclosure when AI materially affects representation |
| Florida | Opinion 24-1 (Jan 2024) | Informed consent required before disclosing confidential info to third-party AI |
| New York | Opinion 2024-5 | General guidance; notes tools are “rapidly evolving” |
| Texas | Opinion 705 (Feb 2025) | Attorneys cannot “blindly rely” on AI; must verify all outputs |
| North Carolina | 2024 FEO 1 | Lawyers “may not abrogate their responsibilities” by relying on AI |
| Kentucky | KBA E-457 (Mar 2024) | No disclosure required for routine AI research unless outsourced |
| New Jersey | 2024 | No per se disclosure requirement unless it affects informed decisions |
| D.C. | Opinion 388 | Comprehensive guidance on AI in client matters |
Common Themes Across Jurisdictions#
- **No blanket disclosure requirement**: Most states don’t require disclosure of all AI use to clients
- **Material use may require disclosure**: When AI use materially affects representation or involves confidential information
- **Court disclosure**: Attorneys must comply with any court rules requiring AI certification
- **Informed consent for confidential data**: Lawyers should obtain consent before inputting confidential information into third-party AI tools
- **Supervision obligation**: Lawyers remain responsible for supervising AI outputs as they would supervise non-lawyer assistants
Federal and State Court AI Disclosure Rules#
Standing Orders from Individual Judges#
Following Mata v. Avianca, numerous federal judges issued standing orders requiring AI disclosure:
**Judge Brantley Starr (N.D. Tex.)**: First to require counsel to file a certificate “attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy.”
**Judge Stephen Alexander Vaden (Ct. Int’l Trade)**: Requires disclosure of:
- Any generative AI program used
- All portions of text drafted with AI assistance
- Certification that use did not disclose confidential information
**Judge Michael Baylson (E.D. Pa.)**: Requires disclosure of any type of AI, not just generative AI.
District-Wide Rules#
| District | Rule |
|---|---|
| E.D. Tex. | Local rules prohibit generative AI for pro se litigants; counsel must verify AI-generated content |
| E.D. Mo. | Generative AI prohibition applies only to pro se litigants |
Sanctions for Non-Compliance#
Violations can trigger:
- Economic sanctions
- Stricken pleadings
- Contempt findings
- Dismissal of the suit
The Verification Duty#
Courts and bar associations have converged on a clear standard: attorneys must verify all AI-generated legal research.
What Verification Requires#
- **Confirming cited cases exist**: Check that every citation refers to a real case
- **Verifying quotes are accurate**: AI frequently fabricates quotations
- **Checking legal principles**: Ensure statements of law are correct
- **Shepardizing/KeyCiting**: Confirm precedent is still good law
- **Reviewing context**: Ensure the AI understood the jurisdictional and factual context
Why “The AI Told Me To” Fails#
Courts consistently hold that attorneys bear ultimate responsibility for work product. The attorney, not the AI, is the licensed professional with duties to the court and client. Relying on AI without verification is no different than failing to verify work performed by a paralegal or junior associate.
Confidentiality and Privilege Risks#
Third-Party AI Tools#
When attorneys input client information into third-party AI systems (ChatGPT, Claude, etc.), they may be:
- Disclosing confidential information to unauthorized parties
- Waiving attorney-client privilege
- Violating data protection obligations
Best Practices for Confidentiality#
- Use enterprise versions with data retention agreements
- Avoid inputting privileged communications, litigation strategy, or client identities
- Anonymize or redact sensitive information where possible
- Obtain informed consent before using third-party AI on client matters
- Document AI use in engagement letters and file memos
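As a minimal illustration of the anonymization step above, the sketch below masks a few common identifier formats before text is sent to a third-party tool. The regex patterns and placeholder labels are illustrative assumptions only; a production redaction pipeline needs far broader coverage (client names, matter numbers, addresses) and human review before anything leaves the firm.

```python
import re

# Illustrative patterns only -- NOT an exhaustive or production-ready set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = redact("Contact J. Doe at jdoe@example.com or 555-867-5309; SSN 123-45-6789.")
print(masked)
```

Pattern-based masking is deliberately conservative: it removes obvious identifiers but cannot recognize context (e.g., a client's name in running prose), which is why attorney review remains part of the workflow.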
Document Review and E-Discovery#
AI-assisted document review (Technology-Assisted Review or TAR) is now standard in large-scale litigation.
Proportionality Analysis#
Courts applying Federal Rule of Civil Procedure 26(b)(1) expect parties to use efficient methods for discovery. This increasingly includes AI-assisted review.
Validation Requirements#
Leading practices include:
- Statistical sampling to validate AI classifications
- Quality control protocols for edge cases
- Documentation of training and validation processes
- Expert testimony capabilities regarding methodology
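The statistical-sampling practice above can be sketched as follows: draw a reproducible random sample of documents the model coded responsive, have reviewers check them, and report precision with a confidence interval. The sample size, seed, and choice of a Wilson score interval here are illustrative assumptions, not a prescribed validation protocol.

```python
import math
import random

def validation_sample(doc_ids, k, seed=0):
    """Draw a reproducible simple random sample for human review."""
    rng = random.Random(seed)
    return rng.sample(doc_ids, k)

def precision_ci(hits, n, z=1.96):
    """Point estimate and Wilson 95% interval for precision from a review sample."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# Hypothetical review: 200 sampled "responsive" calls, 174 confirmed by reviewers
sample = validation_sample(list(range(10_000)), k=200)
p, low, high = precision_ci(174, 200)
print(f"precision {p:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```

Recording the seed, sample size, and interval method is what makes the validation defensible later, including in the expert-testimony scenario the checklist anticipates.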
Failure Modes#
Liability may arise from:
- Over-reliance on AI without adequate quality control
- Failure to update AI models as case understanding evolves
- Insufficient training data leading to systematic gaps
- Bias in AI systems affecting protected categories
Contract Drafting and Legal Automation#
As AI drafts more legal documents, verification requirements intensify.
Attorney Responsibility#
- Attorneys remain responsible for all AI-generated work product
- “Form document” defenses may not apply when AI customizes text
- Malpractice insurers are updating policies to address AI risks
Common AI Drafting Errors#
- Inconsistent defined terms
- Missing provisions required by jurisdiction
- Conflicting clauses within the same document
- Inappropriate terms imported from unrelated contexts
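The first error type, inconsistent defined terms, is also the most mechanically checkable. The sketch below uses a narrow heuristic (capitalized phrases ending in "Date" that were never introduced with the `"Term" means ...` drafting convention); the sample text and regexes are assumptions for illustration, and real contract-analysis tools track every defined term, not one suffix.

```python
import re

SAMPLE = '''
"Effective Date" means the date first written above.
The Supplier shall deliver the Goods by the Effective Date.
Payment is due within 30 days of the Commencement Date.
'''

def defined_terms(text):
    # Terms introduced with the drafting convention '"Term" means ...'
    return set(re.findall(r'"([A-Z][A-Za-z ]+)"\s+means\b', text))

def undefined_uses(text):
    # Capitalized phrases ending in "Date" that were used but never defined
    # (a deliberately narrow heuristic for illustration).
    used = set(re.findall(r'\b([A-Z][a-z]+ Date)\b', text))
    return used - defined_terms(text)

print(undefined_uses(SAMPLE))  # flags the term used without a definition
```

A flag from a checker like this is a prompt for attorney review, not a conclusion: the verification duty still requires a human to decide whether the term is missing, misnamed, or intentionally undefined.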
Law Firm AI Governance#
Policy Components#
Effective law firm AI policies should address:
| Component | Description |
|---|---|
| Approved tools | List of vetted AI platforms with data agreements |
| Prohibited uses | Activities that cannot be performed with AI |
| Verification protocols | Required steps before submitting AI-assisted work |
| Training requirements | Mandatory training for attorneys and staff |
| Disclosure rules | When to disclose AI use to clients and courts |
| Billing guidelines | How to ethically bill for AI-assisted work |
Supervision Obligations#
Under Rules 5.1 and 5.3, supervising attorneys must:
- Ensure subordinates understand AI verification requirements
- Review AI-assisted work product before filing
- Establish systems for competent AI use
- Monitor compliance with firm AI policies
Federal Judiciary Interim Guidance (July 2025)#
The Administrative Office of the U.S. Courts distributed interim AI guidance developed by a task force formed in early 2025.
Key provisions:
- Allows AI use with guardrails
- Cautions against delegating “core judicial functions to AI, including decision-making or case adjudication”
- Recommends “extreme caution” for novel legal questions
- Requires independent verification of all outputs
- States that “users are accountable for all work performed with the assistance of AI”
Frequently Asked Questions#
- Do I have to disclose to clients that I'm using AI?
- Can I be sanctioned for citing AI-generated cases that don't exist?
- What does ABA Formal Opinion 512 require?
- Can I input confidential client information into ChatGPT or other AI tools?
- Do any courts require AI disclosure in filings?
- How should I bill for AI-assisted work?
Related Resources#
On This Site#
- Legal AI Hallucinations: Documented AI hallucination incidents in legal practice
- State AI Legal Ethics Rules: State-by-state ethics opinion guide
- AI Product Liability: When AI tools themselves are defective
Partner Sites#
- Legal AI Hallucinations Guide: Comprehensive incident tracker and analysis
- AI Hiring Discrimination Practice Area: Related AI liability claims
- Find an AI Liability Law Firm: Directory of firms handling AI cases
Professional Resources#
Facing AI-Related Ethics Issues?
From Mata v. Avianca sanctions to state bar disciplinary proceedings, legal AI creates unprecedented professional responsibility risks. With 40+ state bars issuing guidance, federal courts requiring AI disclosure, and malpractice claims rising, attorneys and law firms need expert guidance on AI governance, ethics compliance, and risk management. Connect with professionals who understand the intersection of legal ethics, AI technology, and professional liability.