
Legal AI Standard of Care


The legal profession faces unique standard of care challenges as AI tools become ubiquitous in practice. From legal research to document review to contract drafting, AI is transforming how lawyers work and creating new liability risks. Since the landmark Mata v. Avianca sanctions in June 2023, more than 200 AI ethics incidents have been documented in legal filings, and every major bar association has issued guidance.

  • $5,000 in sanctions: Mata v. Avianca (June 2023)
  • 40+ state bars with AI ethics guidance
  • 6+ judges with AI disclosure orders
  • ABA Formal Opinion 512 (July 2024): the first ABA AI ethics opinion

The Hallucination Problem: Mata v. Avianca and Its Aftermath

The most dramatic AI failures in legal practice involve large language models that confidently fabricate case citations, statutes, and legal principles that do not exist.

Mata v. Avianca, Inc. (S.D.N.Y. 2023)

In February 2022, Roberto Mata filed a personal injury lawsuit against Avianca Airlines, alleging he was injured by a metal serving cart during a flight. His attorneys, Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman P.C., used ChatGPT to prepare a legal motion that contained six fabricated case citations, complete with fictional airlines, invented quotations, and nonexistent judges.

Timeline:

  • Avianca’s counsel notified the Court they could not locate several cited cases
  • Judge P. Kevin Castel ordered the attorneys to show cause why they should not be sanctioned
  • Attorney Schwartz testified he was “operating under the false perception that this website could not possibly be fabricating cases on its own”
June 22, 2023 Sanctions Order
Judge Castel imposed $5,000 in sanctions and required the attorneys to send letters to each judge falsely identified as the author of fabricated opinions, attaching the sanctions order, hearing transcript, and a copy of the fake “opinion.” Judge Castel described one of the AI-generated legal analyses as “gibberish.”

Post-Mata Incidents (2023-2025)

The Mata case opened the floodgates for AI-related sanctions:

| Date | Case/Jurisdiction | Issue | Outcome |
| --- | --- | --- | --- |
| Oct 2024 | S.D. Cal. | Attorney cited 3 nonexistent cases | $15,000 sanctions recommended (Jan 2025) |
| July 2025 | S.D. Miss. (Judge Wingate) | AI-generated TRO with false parties, misquoted laws | Order vacated, investigation |
| July 2025 | D.N.J. (Judge Neals) | Law clerk used ChatGPT without authorization | Order corrected, clerk disciplined |
| 2024-2025 | Multiple states | Bar disciplinary proceedings | Various admonishments |

Federal Judges’ AI Misuse (2025)

In a stunning development, federal judges themselves were caught using AI improperly. In July 2025:

  • Judge Henry T. Wingate (S.D. Miss.) issued a TRO that named plaintiffs and defendants who were not parties to the case, misquoted state law, and referenced four individuals who appear nowhere in the record
  • Judge Julien Xavier Neals (D.N.J.) disclosed that a temporary assistant used ChatGPT “without authorization, without disclosure, and contrary to chambers policy”

Both incidents prompted Senator Chuck Grassley to formally call on the federal judiciary to regulate AI use.


ABA and State Bar Ethics Guidance

ABA Formal Opinion 512 (July 29, 2024)

The American Bar Association Standing Committee on Ethics and Professional Responsibility released its first formal opinion on generative AI, establishing the baseline national framework.

Core requirements under Opinion 512:

  • Competence (Rule 1.1): Lawyers must understand how AI functions and possess skill to use it effectively
  • Confidentiality (Rule 1.6): Client information must be protected when using third-party AI tools
  • Communication (Rule 1.4): Clients should be informed when AI use materially affects representation
  • Reasonable Fees (Rule 1.5): Billing must reflect actual time spent; lawyers cannot bill as though AI-assisted work had been done manually
  • Supervision (Rules 5.1, 5.3): Lawyers must supervise AI use by subordinates and staff
Key ABA Holding
“Lawyers using GAI must fully consider their applicable ethical obligations.” AI cannot substitute for independent professional judgment, and all AI outputs must be verified before filing or delivery.

State Bar Ethics Opinions (2024-2025)
#

Over 40 state bars have now issued AI ethics guidance. Key opinions include:

| State | Opinion | Key Provisions |
| --- | --- | --- |
| California | Nov 2023 | First state guidance; disclosure when AI materially affects representation |
| Florida | Opinion 24-1 (Jan 2024) | Informed consent required before disclosing confidential info to third-party AI |
| New York | Opinion 2024-5 | General guidance; notes tools are “rapidly evolving” |
| Texas | Opinion 705 (Feb 2025) | Attorneys cannot “blindly rely” on AI; must verify all outputs |
| North Carolina | 2024 FEO 1 | Lawyers “may not abrogate their responsibilities” by relying on AI |
| Kentucky | KBA E-457 (Mar 2024) | No disclosure required for routine AI research unless outsourced |
| New Jersey | 2024 | No per se disclosure requirement unless it affects informed decisions |
| D.C. | Opinion 388 | Comprehensive guidance on AI in client matters |

Common Themes Across Jurisdictions

  1. No blanket disclosure requirement: Most states don’t require disclosure of all AI use to clients
  2. Material use may require disclosure: When AI use materially affects representation or involves confidential information
  3. Court disclosure: Attorneys must comply with any court rules requiring AI certification
  4. Informed consent for confidential data: Lawyers should obtain consent before inputting confidential information into third-party AI tools
  5. Supervision obligation: Lawyers remain responsible for supervising AI outputs as they would supervise non-lawyer assistants

Federal and State Court AI Disclosure Rules

Standing Orders from Individual Judges

Following Mata v. Avianca, numerous federal judges issued standing orders requiring AI disclosure:

Judge Brantley Starr (N.D. Tex.) was the first to require counsel to file a certificate “attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy.”

Judge Stephen Alexander Vaden (Ct. Int’l Trade) requires disclosure of:

  • Any generative AI program used
  • All portions of text drafted with AI assistance
  • Certification that use did not disclose confidential information

Judge Michael Baylson (E.D. Pa.) requires disclosure of any type of AI, not just generative AI.

District-Wide Rules

| District | Rule |
| --- | --- |
| E.D. Tex. | Local rules prohibit generative AI for pro se litigants; counsel must verify AI-generated content |
| E.D. Mo. | Generative AI prohibition applies only to pro se litigants |

Sanctions for Non-Compliance

Violations can trigger:

  • Monetary sanctions
  • Stricken pleadings
  • Contempt findings
  • Dismissal of the suit

The Verification Duty

Courts and bar associations have converged on a clear standard: attorneys must verify all AI-generated legal research.

What Verification Requires

  • Confirming cited cases exist: Check that every citation refers to a real case
  • Verifying quotes are accurate: AI frequently fabricates quotations
  • Checking legal principles: Ensure statements of law are correct
  • Shepardizing/KeyCiting: Confirm precedent is still good law
  • Reviewing context: Ensure AI understood the jurisdictional and factual context
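
Parts of this checklist can be automated as a first pass before the mandatory human review. Below is a minimal Python sketch that pulls reporter-style citations out of a draft and flags any that a citation-lookup service cannot match to a real case. The CourtListener endpoint URL and response shape are assumptions to verify against the current API documentation, and a clean run never substitutes for reading the cited opinion.

```python
"""Sketch: flag possibly fabricated case citations before filing.

The CourtListener citation-lookup endpoint and its response shape are
assumptions; verify both against the current API documentation."""
import re
import requests

# Matches simple reporter citations such as "575 U.S. 320" or "925 F.3d 1339".
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.' ]{1,20}?\s+\d{1,4}\b")

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"  # assumed

def unverified_citations(draft: str) -> list[str]:
    """Return citations the lookup service could not match to a real case."""
    flagged = []
    for cite in sorted(set(CITATION_RE.findall(draft))):
        resp = requests.post(LOOKUP_URL, data={"text": cite}, timeout=30)
        resp.raise_for_status()
        results = resp.json()  # assumed: one record per citation, HTTP-style status
        if not results or results[0].get("status") != 200:
            flagged.append(cite)
    return flagged

if __name__ == "__main__":
    # "925 F.3d 1339" is the fabricated Varghese citation from Mata v. Avianca.
    draft = "See Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
    for cite in unverified_citations(draft):
        print(f"UNVERIFIED, confirm manually before filing: {cite}")
```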

Why “The AI Told Me To” Fails

Courts consistently hold that attorneys bear ultimate responsibility for their work product. The attorney, not the AI, is the licensed professional with duties to the court and client. Relying on AI without verification is no different from failing to verify work performed by a paralegal or junior associate.


Confidentiality and Privilege Risks
#

Third-Party AI Tools

When attorneys input client information into third-party AI systems (ChatGPT, Claude, etc.), they may be:

  • Disclosing confidential information to unauthorized parties
  • Waiving attorney-client privilege
  • Violating data protection obligations
Confidentiality Breach Risk
Most consumer AI tools retain user inputs for training. Attorneys must ensure any AI platform used has appropriate data handling agreements, or obtain client consent before inputting any confidential information.

Best Practices for Confidentiality

  1. Use enterprise versions with data retention agreements
  2. Avoid inputting privileged communications, litigation strategy, or client identities
  3. Anonymize or redact sensitive information where possible
  4. Obtain informed consent before using third-party AI on client matters
  5. Document AI use in engagement letters and file memos
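
Point 3 in this list can be backed by a mechanical screen before any text leaves the firm. The following is a minimal, hypothetical redaction pass: the patterns and placeholder labels are illustrative assumptions, and regex alone will miss context-dependent identifiers, so it supplements rather than replaces human review.

```python
"""Sketch: first-pass redaction before text reaches a third-party AI tool.

Patterns and placeholder labels are illustrative assumptions; regex alone
misses context-dependent identifiers, so human review is still required."""
import re

# Illustrative patterns: email addresses, phone numbers, SSNs.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Call Jane Roe at 555-867-5309 re: jroe@example.com.", ["Jane Roe"]))
# -> Call [CLIENT] at [PHONE] re: [EMAIL].
```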

Document Review and E-Discovery
#

AI-assisted document review (Technology-Assisted Review or TAR) is now standard in large-scale litigation.

Proportionality Analysis

Courts applying Federal Rule of Civil Procedure 26(b)(1) expect parties to use efficient methods for discovery. This increasingly includes AI-assisted review.

Validation Requirements

Leading practices include:

  • Statistical sampling to validate AI classifications
  • Quality control protocols for edge cases
  • Documentation of training and validation processes
  • Expert testimony capabilities regarding methodology
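
To make the first item concrete: one common validation design is an elusion test, in which reviewers code a random sample drawn from the documents the model marked non-responsive, and the miss rate is bounded statistically. A minimal sketch using a Wilson score interval, with invented document counts:

```python
"""Sketch: validate a TAR cull with an elusion sample.

Reviewers code a random sample from the documents the model marked
non-responsive; the Wilson score interval bounds the elusion rate.
All document counts below are invented for illustration."""
import math

def wilson_upper(hits: int, n: int, z: float = 1.96) -> float:
    """Upper bound of the ~95% Wilson score interval for a proportion."""
    if n == 0:
        return 1.0
    p = hits / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center + margin) / (1 + z**2 / n)

# 2 responsive docs found in a 1,500-doc sample from a 100,000-doc discard pile.
sample_size, responsive_in_sample, discard_pile = 1500, 2, 100_000
upper = wilson_upper(responsive_in_sample, sample_size)
print(f"Elusion point estimate: {responsive_in_sample / sample_size:.3%}")
print(f"95% upper bound: {upper:.3%} (at most ~{upper * discard_pile:,.0f} missed docs)")
```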

Failure Modes

Liability may arise from:

  • Over-reliance on AI without adequate quality control
  • Failure to update AI models as case understanding evolves
  • Insufficient training data leading to systematic gaps
  • Bias in AI systems affecting protected categories

Contract Drafting and Legal Automation

As AI drafts more legal documents, verification requirements intensify.

Attorney Responsibility

  • Attorneys remain responsible for all AI-generated work product
  • “Form document” defenses may not apply when AI customizes text
  • Malpractice insurers are updating policies to address AI risks

Common AI Drafting Errors

  • Inconsistent defined terms
  • Missing provisions required by jurisdiction
  • Conflicting clauses within the same document
  • Inappropriate terms imported from unrelated contexts
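
Some of these errors can be screened for mechanically. The sketch below targets the inconsistent-defined-terms problem: it collects terms defined with the common ‘"Term" means ...’ drafting convention and flags capitalized multi-word phrases that are used but never defined. Real agreements define terms in other ways, so treat this as a first-pass screen, not a substitute for attorney review.

```python
"""Sketch: flag capitalized multi-word terms used but never defined.

Assumes the common '"Term" means ...' drafting convention, which is an
illustrative simplification; a screening aid, not attorney review."""
import re

DEFINITION_RE = re.compile(r'"([A-Z][A-Za-z ]+)"\s+(?:means|shall mean)')
USAGE_RE = re.compile(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)+)\b")  # multi-word only

def undefined_terms(contract: str) -> set[str]:
    defined = set(DEFINITION_RE.findall(contract))
    used = set(USAGE_RE.findall(contract))
    return used - defined

text = '''"Confidential Information" means all nonpublic data.
Receiving Party shall protect Confidential Information and all Work Product.'''
print(undefined_terms(text))  # e.g. {'Receiving Party', 'Work Product'}
```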

Law Firm AI Governance

Policy Components

Effective law firm AI policies should address:

| Component | Description |
| --- | --- |
| Approved tools | List of vetted AI platforms with data agreements |
| Prohibited uses | Activities that cannot be performed with AI |
| Verification protocols | Required steps before submitting AI-assisted work |
| Training requirements | Mandatory training for attorneys and staff |
| Disclosure rules | When to disclose AI use to clients and courts |
| Billing guidelines | How to ethically bill for AI-assisted work |
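
The approved-tools and prohibited-uses components become enforceable when encoded in a machine-readable form that intake tooling can check. A minimal sketch, with hypothetical tool names and fields:

```python
"""Sketch: 'approved tools' and 'prohibited uses' as an enforceable config.
Tool names and fields are hypothetical examples, not a vetted list."""
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    data_agreement: bool   # signed data-handling / no-training agreement
    confidential_ok: bool  # cleared for client-confidential inputs

APPROVED = {
    "enterprise-llm": ApprovedTool("enterprise-llm", True, True),
    "consumer-chatbot": ApprovedTool("consumer-chatbot", False, False),
}

def may_use(tool: str, involves_confidential: bool) -> bool:
    """Gate check a firm could run before any AI-assisted task."""
    t = APPROVED.get(tool)
    if t is None:
        return False  # unvetted tools are prohibited by default
    if involves_confidential and not t.confidential_ok:
        return False  # Rule 1.6: no confidential data without safeguards
    return True

assert may_use("enterprise-llm", involves_confidential=True)
assert not may_use("consumer-chatbot", involves_confidential=True)
```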

Supervision Obligations

Under Rules 5.1 and 5.3, supervising attorneys must:

  • Ensure subordinates understand AI verification requirements
  • Review AI-assisted work product before filing
  • Establish systems for competent AI use
  • Monitor compliance with firm AI policies

Federal Judiciary Interim Guidance (July 2025)

The Administrative Office of the U.S. Courts distributed interim AI guidance developed by a task force formed in early 2025.

Key provisions:

  • Allows AI use with guardrails
  • Cautions against delegating “core judicial functions to AI, including decision-making or case adjudication”
  • Recommends “extreme caution” for novel legal questions
  • Requires independent verification of all outputs
  • States that “users are accountable for all work performed with the assistance of AI”

Frequently Asked Questions

Do I have to disclose to clients that I'm using AI?

It depends on your jurisdiction and how you’re using AI. Most state bar opinions don’t require disclosure of all AI use. However, disclosure may be required when: (1) AI use materially affects representation, (2) you’re inputting confidential information into third-party tools (requiring informed consent), or (3) the client should know AI is being used to make informed decisions about their representation. When in doubt, disclose.

Can I be sanctioned for citing AI-generated cases that don't exist?

Yes. Mata v. Avianca established that citing fabricated cases subjects attorneys to sanctions. Courts have imposed penalties ranging from $5,000 to $15,000, required letters of apology to misidentified judges, and initiated bar disciplinary proceedings. The duty to verify legal research applies regardless of whether AI generated it.

What does ABA Formal Opinion 512 require?

ABA Formal Opinion 512 (July 2024) requires lawyers using generative AI to: (1) be competent in understanding how AI functions, (2) protect client confidentiality when using third-party AI tools, (3) communicate with clients when AI use materially affects representation, (4) charge reasonable fees that reflect actual work, and (5) supervise AI use by subordinates and staff. It does not prohibit AI use but requires informed, responsible engagement.

Can I input confidential client information into ChatGPT or other AI tools?

Not without significant safeguards. Consumer AI tools may retain inputs for training, potentially disclosing confidential information and waiving privilege. Best practices: use enterprise versions with data retention agreements, anonymize sensitive information, or obtain explicit informed consent from clients before inputting any confidential data into third-party AI systems.

Do any courts require AI disclosure in filings?

Yes. Multiple federal judges have issued standing orders requiring attorneys to certify either that no AI was used in drafting filings, or that all AI-generated content was verified for accuracy. Judge Brantley Starr (N.D. Tex.) was the first; others include Judge Vaden (Ct. Int’l Trade) and Judge Baylson (E.D. Pa.). Some districts have adopted district-wide rules. Check local rules and standing orders in your jurisdiction.

How should I bill for AI-assisted work?

Under ABA Formal Opinion 512 and most state bar guidance, you cannot bill full attorney rates for time “saved” by AI. If AI drafts a document in 5 minutes that would have taken 2 hours manually, you can only bill for actual time spent, including the time to review and verify the AI output. Some firms are developing hybrid models, but the principle is clear: billing must reflect actual work performed.
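
A worked example of the actual-time principle, with hypothetical numbers:

```python
# Worked example of the actual-time principle (all numbers hypothetical):
attorney_rate = 400         # $/hour
ai_draft_minutes = 5        # time spent prompting and generating
review_minutes = 45         # attorney time verifying the AI output
billable_hours = (ai_draft_minutes + review_minutes) / 60
print(f"Bill ${attorney_rate * billable_hours:,.2f} for {billable_hours:.2f} hrs")
# -> Bill $333.33 for 0.83 hrs, not the 2.0 hrs manual drafting would have taken
```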



Facing AI-Related Ethics Issues?

From Mata v. Avianca sanctions to state bar disciplinary proceedings, legal AI creates unprecedented professional responsibility risks. With 40+ state bars issuing guidance, federal courts requiring AI disclosure, and malpractice claims rising, attorneys and law firms need expert guidance on AI governance, ethics compliance, and risk management. Connect with professionals who understand the intersection of legal ethics, AI technology, and professional liability.


Related

AI Chatbot Liability & Customer Service Standard of Care

AI Chatbots: From Convenience to Liability. Customer-facing AI chatbots have moved from novelty to necessity across industries. Companies deploy these systems for 24/7 customer support, sales assistance, and information delivery. But as chatbots become more sophisticated, and more trusted by consumers, the legal exposure for their failures has grown dramatically.

AI Companion Chatbot & Mental Health App Liability

AI Companions: From Emotional Support to Legal Reckoning. AI companion chatbots, designed for emotional connection, romantic relationships, and mental health support, have become a distinct category of liability concern separate from customer service chatbots. These applications are marketed to lonely, depressed, and vulnerable users seeking human-like connection. When those users include children and teenagers struggling with mental health, the stakes become deadly.

AI Content Moderation & Platform Amplification Liability

The End of Platform Immunity for AI. For three decades, Section 230 of the Communications Decency Act shielded online platforms from liability for user-generated content. That shield is crumbling. Courts now distinguish between passively hosting third-party content (still protected) and actively generating, amplifying, or curating content through AI systems (increasingly not protected).

AI Cybersecurity Standard of Care

AI and Cybersecurity: A Two-Sided Liability Coin. Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors: data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence?

AI Defamation and Hallucination Liability

The New Frontier of Defamation Law. Courts are now testing what attorneys describe as a “new frontier of defamation law” as AI systems increasingly generate false, damaging statements about real people. When ChatGPT falsely accused a radio host of embezzlement, when Bing confused a veteran with a convicted terrorist, and when Meta AI claimed a conservative activist participated in the January 6 riot, these weren’t glitches. They represent a fundamental challenge to defamation law built on human publishers and human intent.