AI and Cybersecurity: A Two-Sided Liability Coin#
Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors such as data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence?
This emerging standard of care encompasses both the duty to secure AI systems and the potential duty to use AI for security.
The Regulatory Landscape#
NIST AI Risk Management Framework#
The NIST AI Risk Management Framework (AI RMF 1.0) provides the foundational U.S. guidance for managing AI risks, including cybersecurity risks. Key developments include:
AI RMF Generative AI Profile (July 2024) NIST released NIST-AI-600-1, providing specific guidance for identifying and managing risks unique to generative AI systems. This profile helps organizations assess vulnerabilities specific to large language models and other generative systems.
Adversarial Machine Learning Guidance (2025) NIST AI 100-2e2025, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” provides a comprehensive framework for understanding and defending against attacks on AI systems, including:
- Data poisoning - Manipulation of training data to corrupt model behavior
- Model evasion - Inputs designed to cause incorrect outputs while appearing normal
- Model extraction - Stealing proprietary AI models through query-based attacks
- Inference attacks - Extracting private training data from model outputs
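Of these attack classes, model evasion is the easiest to illustrate concretely. The sketch below shows the well-known fast gradient sign method (FGSM) against a generic PyTorch classifier; the `model`, `images`, `labels`, and `epsilon` values are placeholders, and the snippet is an illustrative example rather than code drawn from the NIST publication.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Model evasion sketch: nudge input x by epsilon * sign(grad of loss w.r.t. x)
    so the perturbation looks like noise yet can flip the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # assumes the model returns class logits
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).detach().clamp(0.0, 1.0)

# Usage sketch (model, images, labels are assumed to exist):
# adv = fgsm_example(model, images, labels)
# clean_acc = (model(images).argmax(1) == labels).float().mean()
# adv_acc   = (model(adv).argmax(1) == labels).float().mean()
```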
Dioptra Testing Software NIST released “Dioptra,” a security testbed enabling organizations to determine which attacks would degrade their AI model performance and quantify the reduction. This supports the AI RMF “Measure” function by providing standardized assessment tools.
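Dioptra's actual interface is not reproduced here, but the kind of measurement it supports can be sketched in a few lines: run a set of candidate attacks against a model and quantify how far accuracy drops from the clean baseline. The `attacks` mapping and the `model` callable below are hypothetical.

```python
def accuracy(model, inputs, labels):
    """Fraction of inputs the model classifies correctly (hypothetical helper)."""
    return sum(1 for x, y in zip(inputs, labels) if model(x) == y) / len(labels)

def degradation_report(model, inputs, labels, attacks):
    """For each named attack, measure how far accuracy falls from the clean baseline."""
    baseline = accuracy(model, inputs, labels)
    report = {}
    for name, attack in attacks.items():          # attack: callable returning perturbed inputs
        attacked = accuracy(model, attack(inputs), labels)
        report[name] = {"clean": baseline, "attacked": attacked,
                        "reduction": baseline - attacked}
    return report
```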
Control Overlays for Securing AI Systems (COSAIS) NIST announced plans for new cybersecurity guidelines adapting SP 800-53 security controls specifically for AI systems. Public drafts are expected in fiscal year 2026, with AI-specific controls addressing model integrity, data provenance, and adversarial robustness.
EU AI Act Cybersecurity Requirements#
The EU AI Act, which entered into force on August 1, 2024, imposes explicit cybersecurity requirements on high-risk AI systems.
Article 15: Accuracy, Robustness and Cybersecurity
Article 15 mandates that high-risk AI systems must be designed to achieve “an appropriate level of accuracy, robustness and cybersecurity” throughout their lifecycle.
Specific technical requirements include defenses against:
- Training data manipulation (data poisoning)
- Pre-trained component attacks (model poisoning)
- Adversarial examples (model evasion)
- Confidentiality attacks on model internals
- Model flaws that create exploitable vulnerabilities
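Article 15 does not prescribe particular techniques, so the following is only a minimal sketch of one possible control against training data manipulation: flagging samples that sit unusually far from their class centroid for human review before training. The feature representation and z-score threshold are assumptions.

```python
import numpy as np

def flag_suspect_samples(features, labels, z_threshold=3.0):
    """Crude poisoning screen: flag training samples whose distance to their class
    centroid is a statistical outlier. Returns indices worth manual review, not a verdict."""
    suspects = []
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        cls_feats = features[idx]
        centroid = cls_feats.mean(axis=0)
        dists = np.linalg.norm(cls_feats - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        suspects.extend(idx[z > z_threshold].tolist())
    return sorted(suspects)
```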
Disclosure Obligations
Providers must disclose to deployers “characteristics, capabilities and limitations” of their AI systems, including “any known and foreseeable circumstances” affecting accuracy, robustness, and cybersecurity. Failure to disclose known vulnerabilities creates significant liability exposure.
General-Purpose AI with Systemic Risk
Providers of GPAI models with systemic risk face heightened obligations including:
- Conducting adversarial testing and model evaluations
- Tracking and reporting serious incidents
- Implementing adequate cybersecurity protections for physical infrastructure
- Defending against “accidental model leakage, unauthorised releases, circumvention of safety measures, and cyberattacks”
Cybersecurity Act Alignment
Under Article 42, AI systems certified under EU Cybersecurity Act schemes may be presumed compliant with certain AI Act cybersecurity requirements, creating a potential safe harbor pathway.
New York DFS AI Cybersecurity Guidance#
On October 16, 2024, the New York Department of Financial Services issued an Industry Letter addressing cybersecurity risks arising from artificial intelligence.
The guidance recognizes AI’s dual nature:
- Benefits: AI can enhance threat detection, improve incident response, and automate security operations
- Risks: AI systems create new attack surfaces and enable more sophisticated threats
Covered entities are expected to assess AI risks within their existing cybersecurity programs and implement appropriate controls.
Emerging Litigation: AI System Security Failures#
Prompt Injection and Trade Secrets#
OpenEvidence Inc. v. Pathway Medical Inc. (2024)
In a first-of-its-kind lawsuit, medical AI startup OpenEvidence sued competitor Pathway Medical, alleging:
- Defendants used fake credentials to access OpenEvidence’s platform
- They deployed prompt injection attacks to extract proprietary system prompts
- The extracted prompts constituted trade secrets under the Defend Trade Secrets Act
This case raises critical questions about:
- Whether prompt engineering techniques constitute protectable trade secrets
- Liability for prompt injection attacks under computer fraud statutes
- The duty to secure AI systems against prompt-based extraction
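On the last point, the duty to secure AI systems against prompt-based extraction, one simple (and admittedly crude) control is an output filter that blocks responses reproducing long verbatim chunks of the confidential system prompt. The sketch below is a hypothetical illustration, not a description of OpenEvidence's actual defenses.

```python
def leaks_system_prompt(response: str, system_prompt: str, window: int = 40) -> bool:
    """Crude output filter: does the response reproduce any long verbatim chunk
    of the confidential system prompt? (Sliding-window substring check.)"""
    prompt = " ".join(system_prompt.split()).lower()
    resp = " ".join(response.split()).lower()
    for start in range(0, max(1, len(prompt) - window)):
        if prompt[start:start + window] in resp:
            return True
    return False

# Usage sketch: block or redact responses before they reach the user.
# if leaks_system_prompt(model_output, SYSTEM_PROMPT):
#     model_output = "[response withheld: possible prompt disclosure]"
```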
AI Privacy and Eavesdropping Claims#
Ambriz v. Google, LLC (2025)
A wave of lawsuits alleges that AI-powered chatbots unlawfully intercept customer communications. In February 2025, a court denied Google’s motion to dismiss claims under the California Invasion of Privacy Act (CIPA).
Key Allegations:
- AI chatbots record customer communications without consent
- Recorded data is used to train AI tools
- AI providers are “third parties” listening to communications, not parties to the conversation
This theory, if successful, could impose significant liability on any organization using AI chatbots for customer service or sales.
Biometric AI Settlements#
Meta Texas Settlement (July 2024)
Texas obtained a $1.4 billion settlement from Meta, the largest privacy-related payout ever obtained by a single state, over Facebook's “Tag Suggestions” feature.
The Violation: AI-powered facial recognition analyzed user photos and identified individuals without explicit consent, violating Texas’s Capture or Use of Biometric Identifier Act.
Implications for AI Security: Organizations deploying biometric AI must implement robust consent mechanisms and cannot rely on default opt-in settings.
Federal Enforcement Actions#
SEC Cybersecurity Disclosure#
SolarWinds Case (2023-2025)
The SEC’s landmark enforcement action against SolarWinds and its CISO tested the boundaries of cybersecurity disclosure obligations:
- July 2024: A federal court dismissed most claims, ruling that cybersecurity deficiencies do not constitute “internal accounting controls” failures
- November 2025: The SEC dropped the remaining case with prejudice
Key Takeaway: While companies need not disclose every cybersecurity risk in granular detail, voluntary public statements about security practices (such as website security pages) can create liability if materially misleading.
SEC 2025 Examination Priorities
The SEC announced that AI, cybersecurity, and crypto remain examination and enforcement priorities. “AI washing” (making false claims about AI capabilities) has already resulted in multiple enforcement actions, including settlements with investment advisers who falsely claimed to use AI-powered investment processes.
FTC Cybersecurity and AI Actions#
Operation AI Comply (September 2024)
The FTC announced enforcement actions against five companies for unfair or deceptive AI practices, including:
- False claims about AI capabilities (DoNotPay’s “AI lawyer”)
- AI-powered business opportunity fraud ($15 million consumer harm)
- Unsubstantiated AI efficacy claims
IntelliVision Consent Order (December 2024)
The FTC issued an order against AI facial recognition provider IntelliVision over unsubstantiated claims about the accuracy of its software and its freedom from bias, providing insight into how the agency evaluates AI accuracy claims.
FTC Security Recommendations (December 2024)
The FTC’s Office of Technology issued recommendations on AI development security:
- Enforce data retention schedules to minimize attack surfaces
- Limit third-party data sharing
- Encrypt sensitive data throughout the AI pipeline
- Apply “secure by design” principles from initial development
- Delete models and algorithms trained on improperly obtained data (algorithmic disgorgement)
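The retention-schedule recommendation is the most mechanical of these and lends itself to automation. The sketch below assumes a simple directory of exported records and a fixed 90-day period; real schedules, storage backends, and legal-hold logic will differ.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION_DAYS = 90  # assumed period; actual values come from the retention policy

def purge_expired(data_dir: str, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete files whose last-modified time exceeds the retention period,
    shrinking the data that an attacker (or a model) could ever touch."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    removed = []
    for path in Path(data_dir).rglob("*"):
        if path.is_file() and datetime.fromtimestamp(path.stat().st_mtime, timezone.utc) < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```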
Algorithmic Disgorgement#
The FTC has ordered algorithmic disgorgement in multiple cases, requiring companies to delete AI models trained on improperly collected data:
- Cambridge Analytica (2019)
- Everalbum (2021)
- WeightWatchers (2022)
- Ring (2023)
- Rite Aid (2023)
- Avast (2024)
This remedy means AI security failures can result in deletion of the AI systems themselves, not just monetary penalties.
The Duty to Use AI for Defense#
An Emerging Standard?#
As AI-powered threat detection becomes industry standard, a critical question emerges: Is failing to use AI-based security tools now negligent?
Arguments for an Affirmative Duty:
- Industry Adoption: AI threat detection is becoming ubiquitous in enterprise security
- Capability Gap: AI can analyze threats at speeds and scales impossible for humans
- Regulatory Expectations: NY DFS guidance encourages AI adoption for security benefits
- Reasonable Care Evolution: Standards evolve with technology capabilities
Arguments Against:
- No Explicit Requirement: No regulation mandates AI security tools specifically
- Technology Neutrality: Security outcomes matter more than specific technologies
- Small Business Burden: AI tools may be cost-prohibitive for smaller organizations
- New Attack Surfaces: AI security tools themselves create vulnerabilities
Change Healthcare: A Cautionary Tale#
The 2024 Change Healthcare breach demonstrated that basic security failures, not sophisticated AI attacks, remain the primary threat vector.
The Facts:
- Attackers gained access via compromised credentials for a Citrix remote access portal
- The portal lacked multi-factor authentication
- UnitedHealth’s CEO testified before Congress about these basic failures
The Lesson: While AI can enhance security, it cannot substitute for fundamental security hygiene. Organizations face liability for failing to implement basic controls regardless of their AI sophistication.
Vendor Liability for AI Security Vulnerabilities#
Emerging Theories#
Product Liability for AI Security Flaws
As AI systems cause harm due to security vulnerabilities, courts are applying traditional product liability frameworks:
- Manufacturing Defects: AI trained on poisoned data
- Design Defects: Architectures vulnerable to adversarial attacks
- Failure to Warn: Not disclosing known security limitations
Negligence Standards
Lawfare’s analysis notes that courts can apply negligence frameworks to AI developers without waiting for legislation. California’s proposed SB 1047 (vetoed in 2024) would have codified a “duty to take reasonable care” for frontier AI developers.
Contractual Allocation#
When onboarding AI vendors, organizations should assess:
- Indemnification provisions for security failures
- Liability caps and exclusions
- Breach notification obligations
- Requirements for security updates and patches
- Data handling and deletion requirements
The Emerging Standard of Care#
For AI System Operators#
Security by Design
- Implement NIST AI RMF controls appropriate to risk level
- Conduct adversarial testing before deployment
- Plan for ongoing monitoring and updates
Vulnerability Management
- Maintain processes to identify and patch AI-specific vulnerabilities
- Monitor for data poisoning and model drift
- Implement access controls on training pipelines
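Monitoring for model drift can start with something as simple as comparing the distribution of current model scores against a baseline window. Below is a hedged sketch using the population stability index; the bin count and the conventional ~0.2 alert threshold are illustrative assumptions, not a regulatory requirement.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two score distributions; values above roughly 0.2 are commonly
    treated as a signal of meaningful drift worth investigating."""
    lo = float(min(np.min(baseline), np.min(current)))
    hi = float(max(np.max(baseline), np.max(current)))
    b = np.histogram(baseline, bins=bins, range=(lo, hi))[0] / len(baseline)
    c = np.histogram(current, bins=bins, range=(lo, hi))[0] / len(current)
    b = np.clip(b, 1e-6, None)   # avoid log(0) for empty bins
    c = np.clip(c, 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))
```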
Incident Response
- Include AI-specific scenarios in incident response plans
- Prepare for model compromise and extraction attacks
- Document response procedures for adversarial examples
Disclosure Obligations
- Understand regulatory disclosure requirements (EU AI Act, SEC rules)
- Avoid misleading public statements about AI security
- Promptly disclose material vulnerabilities
For AI Vendors#
Development Security
- Apply secure development lifecycle to AI systems
- Test for adversarial robustness before release
- Document security properties and limitations
Customer Disclosure
- Provide clear information on security capabilities and limitations
- Disclose known vulnerabilities and attack vectors
- Offer guidance on secure deployment configurations
Ongoing Support
- Provide security updates throughout product lifecycle
- Monitor for emerging vulnerabilities in deployed systems
- Establish clear end-of-support communications
For Security Professionals#
Evaluate AI Security Tools
- Assess whether AI-based threat detection is appropriate for your environment
- Understand that AI tools create new attack surfaces
- Maintain human oversight of AI security decisions
Secure AI Deployments
- Apply organizational security standards to AI systems
- Include AI systems in vulnerability management programs
- Monitor AI systems for anomalous behavior
Document Decisions
- Record rationale for AI security tool adoption (or non-adoption)
- Maintain evidence of security testing and validation
- Preserve audit trails of AI system changes
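One lightweight way to keep a tamper-evident audit trail of AI system changes is a hash-chained log, sketched below. The field names and in-memory list storage are assumptions for illustration; production systems would persist entries to durable, access-controlled storage.

```python
import hashlib
import json
import time

def append_audit_entry(log: list, actor: str, change: str, model_version: str) -> dict:
    """Append a hash-chained record of an AI system change; editing any earlier
    entry breaks the chain, making tampering detectable on review."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"timestamp": time.time(), "actor": actor, "change": change,
             "model_version": model_version, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

# Usage sketch:
# audit_log = []
# append_audit_entry(audit_log, "jdoe", "promoted model to production", "v2.3.1")
```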