Skip to main content
  1. AI Standard of Care by Industry/

AI Cybersecurity Standard of Care

Table of Contents

AI and Cybersecurity: A Two-Sided Liability Coin
#

Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors, data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence?

This emerging standard of care encompasses both the duty to secure AI systems and the potential duty to use AI for security.

The Regulatory Landscape
#

NIST AI Risk Management Framework
#

The NIST AI Risk Management Framework (AI RMF 1.0) provides the foundational U.S. guidance for managing AI risks, including cybersecurity risks. Key developments include:

AI RMF Generative AI Profile (July 2024) NIST released NIST-AI-600-1, providing specific guidance for identifying and managing risks unique to generative AI systems. This profile helps organizations assess vulnerabilities specific to large language models and other generative systems.

Adversarial Machine Learning Guidance (2025) NIST AI 100-2e2025, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” provides a comprehensive framework for understanding and defending against attacks on AI systems, including:

  • Data poisoning - Manipulation of training data to corrupt model behavior
  • Model evasion - Inputs designed to cause incorrect outputs while appearing normal
  • Model extraction - Stealing proprietary AI models through query-based attacks
  • Inference attacks - Extracting private training data from model outputs

Dioptra Testing Software NIST released “Dioptra,” a security testbed enabling organizations to determine which attacks would degrade their AI model performance and quantify the reduction. This supports the AI RMF “Measure” function by providing standardized assessment tools.

Control Overlays for Securing AI Systems (COSAIS) NIST announced plans for new cybersecurity guidelines adapting SP 800-53 security controls specifically for AI systems. Public drafts are expected in fiscal year 2026, with AI-specific controls addressing model integrity, data provenance, and adversarial robustness.

EU AI Act Cybersecurity Requirements
#

The EU AI Act, which entered into force on August 1, 2024, imposes explicit cybersecurity requirements on high-risk AI systems.

Article 15: Accuracy, Robustness and Cybersecurity

Article 15 mandates that high-risk AI systems must be designed to achieve “an appropriate level of accuracy, robustness and cybersecurity” throughout their lifecycle.

Specific technical requirements include defenses against:

  • Training data manipulation (data poisoning)
  • Pre-trained component attacks (model poisoning)
  • Adversarial examples (model evasion)
  • Confidentiality attacks on model internals
  • Model flaws that create exploitable vulnerabilities

Disclosure Obligations

Providers must disclose to deployers “characteristics, capabilities and limitations” of their AI systems, including “any known and foreseeable circumstances” affecting accuracy, robustness, and cybersecurity. Failure to disclose known vulnerabilities creates significant liability exposure.

General-Purpose AI with Systemic Risk

Providers of GPAI models with systemic risk face heightened obligations including:

  • Conducting adversarial testing and model evaluations
  • Tracking and reporting serious incidents
  • Implementing adequate cybersecurity protections for physical infrastructure
  • Defending against “accidental model leakage, unauthorised releases, circumvention of safety measures, and cyberattacks”

Cybersecurity Act Alignment

Under Article 42, AI systems certified under EU Cybersecurity Act schemes may be presumed compliant with certain AI Act cybersecurity requirements, creating a potential safe harbor pathway.

New York DFS AI Cybersecurity Guidance
#

In October 2024, the New York Department of Financial Services issued Industry Letter 20241016 addressing cybersecurity risks arising from artificial intelligence.

The guidance recognizes AI’s dual nature:

  • Benefits: AI can enhance threat detection, improve incident response, and automate security operations
  • Risks: AI systems create new attack surfaces and enable more sophisticated threats

Covered entities are expected to assess AI risks within their existing cybersecurity programs and implement appropriate controls.

Emerging Litigation: AI System Security Failures
#

Prompt Injection and Trade Secrets
#

OpenEvidence Inc. v. Pathway Medical Inc. (2024)

In a first-of-its-kind lawsuit, medical AI startup OpenEvidence sued competitor Pathway Medical, alleging:

  • Defendants used fake credentials to access OpenEvidence’s platform
  • They deployed prompt injection attacks to extract proprietary system prompts
  • The extracted prompts constituted trade secrets under the Defend Trade Secrets Act

This case raises critical questions about:

  • Whether prompt engineering techniques constitute protectable trade secrets
  • Liability for prompt injection attacks under computer fraud statutes
  • The duty to secure AI systems against prompt-based extraction

AI Privacy and Eavesdropping Claims
#

Ambriz v. Google, LLC (2025)

A wave of lawsuits alleges that AI-powered chatbots unlawfully intercept customer communications. In February 2025, a court denied Google’s motion to dismiss claims under the California Invasion of Privacy Act (CIPA).

Key Allegations:

  • AI chatbots record customer communications without consent
  • Recorded data is used to train AI tools
  • AI providers are “third parties” listening to communications, not parties to the conversation

This theory, if successful, could impose significant liability on any organization using AI chatbots for customer service or sales.

Biometric AI Settlements
#

Meta Texas Settlement (July 2024)

Texas obtained a $1.4 billion settlement, the largest privacy-related payout ever obtained by a single state, over Facebook’s “Tag Suggestions” feature.

The Violation: AI-powered facial recognition analyzed user photos and identified individuals without explicit consent, violating Texas’s Capture or Use of Biometric Identifier Act.

Implications for AI Security: Organizations deploying biometric AI must implement robust consent mechanisms and cannot rely on default opt-in settings.

Federal Enforcement Actions
#

SEC Cybersecurity Disclosure
#

SolarWinds Case (2023-2025)

The SEC’s landmark enforcement action against SolarWinds and its CISO tested the boundaries of cybersecurity disclosure obligations:

  • July 2024: A federal court dismissed most claims, ruling that cybersecurity deficiencies do not constitute “internal accounting controls” failures
  • November 2025: The SEC dropped the remaining case with prejudice

Key Takeaway: While companies need not disclose every cybersecurity risk in granular detail, voluntary public statements about security practices (such as website security pages) can create liability if materially misleading.

SEC 2025 Examination Priorities

The SEC announced that AI, cybersecurity, and crypto remain enforcement priorities. “AI washing”, making false claims about AI capabilities, has already resulted in multiple enforcement actions, including settlements with investment advisers who falsely claimed AI-powered investment processes.

FTC Cybersecurity and AI Actions
#

Operation AI Comply (September 2024)

The FTC announced enforcement actions against five companies for unfair or deceptive AI practices, including:

  • False claims about AI capabilities (DoNotPay’s “AI lawyer”)
  • AI-powered business opportunity fraud ($15 million consumer harm)
  • Unsubstantiated AI efficacy claims

IntelliVision Consent Order (December 2024)

The FTC issued an order against AI facial recognition provider IntelliVision for claims of AI bias and efficacy, providing insight into how the agency evaluates AI accuracy claims.

FTC Security Recommendations (December 2024)

The FTC’s Office of Technology issued recommendations on AI development security:

  • Enforce data retention schedules to minimize attack surfaces
  • Limit third-party data sharing
  • Encrypt sensitive data throughout the AI pipeline
  • Apply “secure by design” principles from initial development
  • Delete models and algorithms trained on improperly obtained data (algorithmic disgorgement)

Algorithmic Disgorgement
#

The FTC has ordered algorithmic disgorgement in multiple cases, requiring companies to delete AI models trained on improperly collected data:

  • Cambridge Analytica (2019)
  • Everalbum (2022)
  • WeightWatchers (2023)
  • Ring (2023)
  • Rite Aid (2023)
  • Avast (2024)

This remedy means AI security failures can result in deletion of the AI systems themselves, not just monetary penalties.

The Duty to Use AI for Defense
#

An Emerging Standard?
#

As AI-powered threat detection becomes industry standard, a critical question emerges: Is failing to use AI-based security tools now negligent?

Arguments for an Affirmative Duty:

  1. Industry Adoption: AI threat detection is becoming ubiquitous in enterprise security
  2. Capability Gap: AI can analyze threats at speeds and scales impossible for humans
  3. Regulatory Expectations: NY DFS guidance encourages AI adoption for security benefits
  4. Reasonable Care Evolution: Standards evolve with technology capabilities

Arguments Against:

  1. No Explicit Requirement: No regulation mandates AI security tools specifically
  2. Technology Neutrality: Security outcomes matter more than specific technologies
  3. Small Business Burden: AI tools may be cost-prohibitive for smaller organizations
  4. New Attack Surfaces: AI security tools themselves create vulnerabilities

Change Healthcare: A Cautionary Tale
#

The 2024 Change Healthcare breach demonstrated how basic security failures:not sophisticated AI attacks, remain the primary threat vector.

The Facts:

  • Attackers gained access via compromised credentials for a Citrix remote access portal
  • The portal lacked multi-factor authentication
  • UnitedHealth’s CEO testified before Congress about these basic failures

The Lesson: While AI can enhance security, it cannot substitute for fundamental hygiene. Organizations face liability for failing to implement basic controls regardless of their AI sophistication.

Vendor Liability for AI Security Vulnerabilities
#

Emerging Theories
#

Product Liability for AI Security Flaws

As AI systems cause harm due to security vulnerabilities, courts are applying traditional product liability frameworks:

  • Manufacturing Defects: AI trained on poisoned data
  • Design Defects: Architectures vulnerable to adversarial attacks
  • Failure to Warn: Not disclosing known security limitations

Negligence Standards

Lawfare’s analysis notes that courts can apply negligence frameworks to AI developers without waiting for legislation. California’s proposed SB 1047 (vetoed in 2024) would have codified a “duty to take reasonable care” for frontier AI developers.

Contractual Allocation
#

When onboarding AI vendors, organizations should assess:

  • Indemnification provisions for security failures
  • Liability caps and exclusions
  • Breach notification obligations
  • Requirements for security updates and patches
  • Data handling and deletion requirements

The Emerging Standard of Care
#

For AI System Operators
#

  1. Security by Design

    • Implement NIST AI RMF controls appropriate to risk level
    • Conduct adversarial testing before deployment
    • Plan for ongoing monitoring and updates
  2. Vulnerability Management

    • Maintain processes to identify and patch AI-specific vulnerabilities
    • Monitor for data poisoning and model drift
    • Implement access controls on training pipelines
  3. Incident Response

    • Include AI-specific scenarios in incident response plans
    • Prepare for model compromise and extraction attacks
    • Document response procedures for adversarial examples
  4. Disclosure Obligations

    • Understand regulatory disclosure requirements (EU AI Act, SEC rules)
    • Avoid misleading public statements about AI security
    • Promptly disclose material vulnerabilities

For AI Vendors
#

  1. Development Security

    • Apply secure development lifecycle to AI systems
    • Test for adversarial robustness before release
    • Document security properties and limitations
  2. Customer Disclosure

    • Provide clear information on security capabilities and limitations
    • Disclose known vulnerabilities and attack vectors
    • Offer guidance on secure deployment configurations
  3. Ongoing Support

    • Provide security updates throughout product lifecycle
    • Monitor for emerging vulnerabilities in deployed systems
    • Establish clear end-of-support communications

For Security Professionals
#

  1. Evaluate AI Security Tools

    • Assess whether AI-based threat detection is appropriate for your environment
    • Understand that AI tools create new attack surfaces
    • Maintain human oversight of AI security decisions
  2. Secure AI Deployments

    • Apply organizational security standards to AI systems
    • Include AI systems in vulnerability management programs
    • Monitor AI systems for anomalous behavior
  3. Document Decisions

    • Record rationale for AI security tool adoption (or non-adoption)
    • Maintain evidence of security testing and validation
    • Preserve audit trails of AI system changes

Resources
#

Related

AI Supply Chain & Logistics Liability

AI in Supply Chain: Commercial Harm at Scale # Artificial intelligence has transformed supply chain management. The global AI in supply chain market has grown from $5.05 billion in 2023 to approximately $7.15 billion in 2024, with projections reaching $192.51 billion by 2034, a 42.7% compound annual growth rate. AI-driven inventory optimization alone represents a $5.9 billion market in 2024, expected to reach $31.9 billion by 2034.

AI in Pharmaceutical Drug Discovery Liability

AI in Drug Discovery: The New Liability Frontier # Artificial intelligence is transforming pharmaceutical development at unprecedented scale. The AI drug discovery market has grown to approximately $2.5-7 billion in 2025, with projections reaching $16-134 billion by 2034 depending on the analysis. AI-discovered molecules reportedly achieve an 80-90% success rate in Phase I trials, substantially higher than traditional discovery methods.

AI Chatbot Liability & Customer Service Standard of Care

AI Chatbots: From Convenience to Liability # Customer-facing AI chatbots have moved from novelty to necessity across industries. Companies deploy these systems for 24/7 customer support, sales assistance, and information delivery. But as chatbots become more sophisticatedand more trusted by consumersthe legal exposure for their failures has grown dramatically.

AI Companion Chatbot & Mental Health App Liability

AI Companions: From Emotional Support to Legal Reckoning # AI companion chatbots, designed for emotional connection, romantic relationships, and mental health support, have become a distinct category of liability concern separate from customer service chatbots. These applications are marketed to lonely, depressed, and vulnerable users seeking human-like connection. When those users include children and teenagers struggling with mental health, the stakes become deadly.

AI Content Moderation & Platform Amplification Liability

The End of Platform Immunity for AI # For three decades, Section 230 of the Communications Decency Act shielded online platforms from liability for user-generated content. That shield is crumbling. Courts now distinguish between passively hosting third-party content, still protected, and actively generating, amplifying, or curating content through AI systems, increasingly not protected.

AI ESG Claims & Greenwashing Liability

Greenwashing in the Age of AI: A Double-Edged Sword # Environmental, Social, and Governance (ESG) claims have become central to corporate reputation, investor relations, and regulatory compliance. Global ESG assets are projected to reach $53 trillion by end of 2025. But as the stakes rise, so does the risk of misleading sustainability claims, and AI is playing an increasingly complex role.