
New York AI Ethics Rules for Attorneys


New York has developed one of the most comprehensive frameworks for AI ethics in legal practice. In April 2024, the New York State Bar Association (NYSBA) released its Task Force on Artificial Intelligence report, and the NYC Bar Association later issued Formal Opinion 2024-5. Together, these documents provide extensive guidance for New York attorneys using AI.


Key Guidance Documents

NYSBA Task Force Report (April 2024)

The nearly 90-page report is the most comprehensive guidance on AI use issued by any state bar association:

Scope: Examines (1) evolution of AI and generative AI; (2) benefits and risks; (3) impact on the legal profession; (4) legislative overview and recommendations; (5) proposed guidelines.

Approval: Adopted by NYSBA House of Delegates on April 6, 2024.

Full Report: NYSBA Task Force Report (PDF)

NYC Bar Formal Opinion 2024-5

The NYC Bar Professional Ethics Committee issued formal guidance on generative AI use:

Focus: Ethical obligations of New York lawyers and law firms using generative AI

Approach: General guidance recognizing that AI tools are rapidly evolving

Full Opinion: NYC Bar Opinion 2024-5


The Competence Imperative

Provocative Finding
The NYSBA Task Force suggested that “a refusal to use technology that makes legal work more accurate and efficient may be considered a refusal to provide competent legal representation to clients.”

This statement implies that avoiding AI might itself be an ethical violation, a provocative position that goes beyond other state guidance. While not a binding rule, it signals New York’s view that technology competence is essential to modern legal practice.

Implications:

  • Attorneys should develop AI literacy
  • Blanket refusals to use AI may be questioned
  • Understanding AI capabilities and limitations is part of competence

Core Ethical Obligations

Competence (Rule 1.1)

New York emphasizes that competent AI use requires:

  • Understanding the technology sufficiently to assess its outputs
  • Recognizing limitations including hallucination risk
  • Verification of all outputs before use in client matters
  • Ongoing education as AI capabilities evolve

Confidentiality (Rule 1.6)

Privilege Protection
NYSBA specifically warns that AI use must not compromise attorney-client privilege.

Key Concerns:

  • Data input to AI systems may be stored, accessed, or used for training
  • Third-party AI platforms may not have adequate security protections
  • Privilege could be waived if confidential information is improperly disclosed

Protective Measures:

  • Assess AI platform security and data handling policies
  • Consider enterprise solutions with enhanced protections
  • Obtain client consent when inputting confidential information
  • Document confidentiality safeguards

Communication (Rule 1.4)

The Task Force advises disclosure of AI use to clients:

  • Inform clients when AI tools are employed in their cases
  • Explain how AI is being used and any limitations
  • Discuss AI use in engagement letters
  • Update clients if AI use changes

Supervision (Rules 5.1, 5.3)

Attorneys and firms have supervisory obligations:

  • Supervise nonlawyers - attorneys must ensure paralegals and other employees use AI appropriately
  • Establish policies - firms should have written AI use guidelines
  • Train staff - provide education on ethical AI use
  • Review outputs - supervise all AI-generated work product

New York Rules of Professional Conduct Implicated

  • Rule 1.1 (Competence): Understand AI; verify all outputs
  • Rule 1.4 (Communication): Disclose AI use to clients
  • Rule 1.5 (Fees): Charge reasonable fees; account for AI efficiency savings
  • Rule 1.6 (Confidentiality): Protect privilege; assess AI security
  • Rule 3.3 (Candor): Verify AI content before court submission
  • Rule 5.1 (Partner/Supervisory Duties): Establish AI policies; supervise
  • Rule 5.3 (Nonlawyer Assistance): Supervise AI use by staff
  • Rule 8.4 (Misconduct): No deceptive AI use

Federal Court AI Orders in New York

Southern District of New York

Several judges in the SDNY have addressed AI:

Mata v. Avianca (Judge P. Kevin Castel):

  • The landmark AI hallucination case
  • $5,000 sanctions for attorneys who cited fake ChatGPT-generated cases
  • Established that verification is required regardless of AI tool used

United States v. Cohen (Judge Jesse Furman):

  • Declined to impose sanctions where no subjective bad faith was found
  • Clarified that sanctions require more than negligent AI use
  • Still emphasized attorney responsibility for all filings

Eastern District of New York

Individual judges may have standing orders. Attorneys should check for AI disclosure requirements when assigned to cases.


NYSBA Four Main Recommendations

The Task Force made four key recommendations:

1. Adopt Guidelines

NYSBA should adopt AI/Generative AI guidelines and establish a standing committee for periodic updates.

2. Focus on Education

Prioritize education over legislation: focus on educating judges, lawyers, law students, and regulators so they understand AI well enough to apply existing law.

3. Identify Risks for New Regulation

Legislatures should identify risks not addressed by existing laws and determine whether AI should be regulated comprehensively or industry-by-industry.

4. Examine the Law’s Role

The rapid advancement of AI requires examining the role of law as a tool of governance, including how it expresses social values, protects against risks, and stabilizes society.


AI & Emerging Technologies Committee

Following the Task Force report, NYSBA established the AI & Emerging Technologies Committee (AIETC):

Mission: Examine the legal, social, and ethical impact of AI, generative AI, agentic AI, and other emerging technologies on the legal profession

Focus Areas:

  • Access to justice implications
  • Legal regulations
  • Privacy preservation
  • Global community impact

AI Disclosure Requirements in New York

New York’s approach emphasizes client communication:

Client Disclosure:

  • Task Force advises disclosing AI use to clients
  • Include AI provisions in engagement letters
  • Discuss limitations and safeguards

Court Disclosure:

  • No uniform state court requirement
  • Individual federal judges may require disclosure
  • Always verify content regardless of disclosure rules

Practical Compliance Steps for New York Attorneys

New York AI Compliance Checklist

For Competence:

  1. Develop AI literacy and understand how AI tools work
  2. Recognize AI is part of modern legal competence
  3. Stay current on AI developments and risks
  4. Consider whether your practice needs AI integration

For Confidentiality:

  5. Assess AI platform security and data policies
  6. Consider enterprise solutions with enhanced protections
  7. Obtain client consent for confidential data input
  8. Document your confidentiality safeguards

For Communication:

  9. Disclose AI use to clients as recommended
  10. Include AI provisions in engagement letters
  11. Explain AI limitations and verification processes
  12. Update clients if AI use changes

For Verification:

  13. Independently verify all citations
  14. Check quotes against original sources
  15. Shepardize/KeyCite all authority
  16. Review for logical consistency and accuracy

For Supervision:

  17. Establish written firm AI policies
  18. Train all lawyers and staff on AI use
  19. Review AI-generated work product
  20. Ensure compliance with ethical obligations


Mata v. Avianca: The Landmark Case

The case that launched the AI ethics conversation occurred in the SDNY:

Facts:

  • Attorneys used ChatGPT for legal research
  • ChatGPT generated six completely fabricated cases
  • When asked if citations were real, ChatGPT affirmed they were

Sanctions:

  • $5,000 fine against attorneys and firm
  • Required the attorneys to notify the judges falsely identified as authors of the fabricated opinions
  • Found subjective bad faith, based in part on the attorneys standing by the fabricated citations after they were questioned

Key Lesson: Judge Castel emphasized that “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” but “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”


Frequently Asked Questions

Does New York require disclosure of AI use to clients?

The NYSBA Task Force recommends disclosing AI use to clients when AI tools are employed in their cases. While not an absolute rule, disclosure is advisable and should be addressed in engagement letters. NYC Bar Opinion 2024-5 provides additional guidance on communication obligations.

Is it unethical to refuse to use AI in New York?

The NYSBA Task Force provocatively suggested that “a refusal to use technology that makes legal work more accurate and efficient may be considered a refusal to provide competent legal representation.” While not binding, this signals that blanket AI avoidance may be questioned. Attorneys should develop AI literacy even if they don’t use AI directly.

What happened in Mata v. Avianca?

Attorneys in this SDNY case cited six completely fabricated cases generated by ChatGPT. When the court questioned the citations, the attorneys initially defended them. Judge Castel imposed $5,000 in sanctions, finding that the attorneys acted in subjective bad faith by standing by the fabricated citations instead of verifying them. The case established that attorneys must verify all AI-generated content.

How should New York attorneys protect client confidentiality with AI?

NYSBA emphasizes that AI must not compromise attorney-client privilege. Before using AI with client information: assess platform security, review data handling policies, consider enterprise solutions, obtain client consent, and document safeguards. Be especially cautious with third-party consumer AI tools.



Questions About AI Ethics Compliance in New York?

New York's comprehensive framework, including the provocative suggestion that refusing AI may itself raise competence concerns, requires careful attention. Understanding NYSBA guidance and the Mata v. Avianca precedent is essential for New York attorneys integrating AI into their practice.


Related

California AI Ethics Rules for Attorneys

California was the first state to approve regulatory guidance for attorney use of generative AI, releasing its “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” in November 2023. The California State Bar has characterized this guidance as “guiding principles rather than best practices,” reflecting the rapidly evolving nature of AI technology.

Florida AI Ethics Rules for Attorneys

On January 19, 2024, the Florida Bar Board of Governors unanimously approved Ethics Opinion 24-1, providing guidance on the ethical use of generative artificial intelligence in legal practice. Florida was among the first states to issue formal AI ethics guidance, and Opinion 24-1 has been recognized as a model for other jurisdictions.

Pennsylvania AI Ethics Rules for Attorneys

In May 2024, the Pennsylvania Bar Association and Philadelphia Bar Association jointly released Formal Opinion 2024-200, providing comprehensive guidance on ethical issues regarding attorney use of artificial intelligence. This joint opinion reflects collaboration between the state’s two major bar associations and addresses the full range of AI ethics considerations.

Texas AI Ethics Rules for Attorneys

In February 2025, the Professional Ethics Committee for the State Bar of Texas issued Opinion 705, providing comprehensive guidance on Texas attorneys’ use of generative artificial intelligence. This opinion builds on the work of the Taskforce for Responsible AI in the Law (TRAIL), an initiative launched by the Texas State Bar’s Immediate Past President, Cindy Tisdale.