The AI Insurance Crisis: Uninsurable Risk?#
The insurance industry faces an unprecedented challenge: how to price and cover risks from technology that even its creators cannot fully predict. As AI systems generate outputs that cause real-world harm (defamatory hallucinations, copyright infringement, discriminatory decisions, even deaths), insurers are confronting a fundamental question: can AI risks be insured at all?
Major carriers are retreating from AI coverage just as businesses most need it. The result is a widening protection gap that threatens to slow AI adoption, expose companies to catastrophic losses, and leave AI victims without recourse.
Major Insurers Retreat from AI Coverage#
AIG, Great American, and WR Berkley Seek Exclusions#
In late 2024 and 2025, major insurance carriers petitioned U.S. regulators to exclude AI-related liabilities from corporate coverage.
WR Berkley’s “Absolute AI Exclusion”
WR Berkley proposed some of the broadest measures yet, including exclusions that bar claims related to “any actual or alleged use” of AI, regardless of whether the model was company-owned, third-party, licensed, or embedded in software tools. Berkley introduced what it calls an “absolute AI exclusion” across:
- Directors and Officers (D&O) policies
- Errors and Omissions (E&O) coverage
- Fiduciary liability products
AIG’s Position
In a filing with the Illinois insurance regulator, AIG described generative AI as a “wide-ranging technology” and said the possibility of events triggering future claims would “likely increase over time.” While AIG told the Financial Times it “has no plans to implement them at this time,” securing regulatory approval keeps the exclusions available as an option.
Why Insurers Are Pulling Back
One underwriter described AI as “too much of a black box,” noting that some insurers cover AI-enhanced software but have declined to underwrite risks from large language models (LLMs) like ChatGPT.
Kevin Kalinich, head of cyber risk at Aon, explained the challenge: “We don’t yet have enough capacity for [model] providers. Insurers can’t afford to pay if an AI provider makes a mistake that ends up as a systemic, correlated, aggregated risk.”
The Data Vacuum: Why AI Risks Cannot Be Priced#
Historical Data Does Not Exist#
Traditional insurance pricing relies on actuarial analysis of historical claims data. For AI, the data vacuum is nearly total:
Copyright litigation outcomes unknown: Major cases like NYT v. OpenAI, RIAA v. Suno/Udio, and music publishers v. Anthropic remain in discovery. Theoretical statutory damages run into hundreds of billions of dollars under willful infringement provisions ($150,000 per work).
Product liability theories untested: Courts are only beginning to apply product liability frameworks to AI outputs. The Character.AI wrongful death case (Garcia v. Character Technologies) and OpenAI suicide litigation (Raine v. OpenAI) will establish precedent, but that precedent does not yet exist.
Regulatory landscape shifting: The EU AI Act entered force in August 2024. U.S. state laws proliferate. Compliance costs and enforcement patterns remain speculative.
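The scale of the statutory-damages point above is easy to make concrete. The sketch below is purely illustrative: the per-work figure comes from the willful-infringement ceiling cited above, but the catalog sizes are hypothetical assumptions, not figures from any filing.

```python
# Illustrative only: upper-bound statutory-damages exposure when every work
# draws the willful-infringement maximum of $150,000 per work.
# The work counts passed in below are hypothetical, not from any lawsuit.
PER_WORK_WILLFUL_MAX = 150_000  # dollars per infringed work

def max_statutory_exposure(num_works: int, per_work: int = PER_WORK_WILLFUL_MAX) -> int:
    """Worst-case exposure: every work awarded the statutory maximum."""
    return num_works * per_work

# Even a hypothetical one-million-work catalog yields $150 billion in
# theoretical exposure -- the "hundreds of billions" scale insurers cite.
print(max_statutory_exposure(1_000_000))
```

No actuarial table exists for where actual awards will land between zero and this ceiling, which is precisely the pricing problem.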
The Correlation Problem#
Unlike traditional risks, AI failures may be highly correlated: a single model flaw, training data problem, or adversarial attack could affect every deployment simultaneously. As Aon’s Kalinich noted, insurers cannot absorb “systemic, correlated, aggregated risk”; yet that is exactly the profile AI presents.
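Why correlation breaks diversification can be shown with a toy Monte Carlo comparison. The portfolio size, failure probability, and claim amount below are illustrative assumptions, not market data; the point is that identical expected losses produce radically different worst years.

```python
# Toy model: same expected annual loss, opposite tail behavior.
# Independent failures diversify; a shared-model flaw does not.
import random

N_POLICIES = 1_000     # assumed portfolio of AI deployments insured
P_FAIL = 0.01          # assumed annual failure probability
CLAIM = 1_000_000      # assumed flat claim size in dollars

def independent_year(rng: random.Random) -> int:
    """Each deployment fails on its own -> losses cluster near the mean."""
    return sum(CLAIM for _ in range(N_POLICIES) if rng.random() < P_FAIL)

def correlated_year(rng: random.Random) -> int:
    """One shared model flaw hits every deployment at once, or none at all."""
    return N_POLICIES * CLAIM if rng.random() < P_FAIL else 0

rng = random.Random(42)
years = 2_000
indep = [independent_year(rng) for _ in range(years)]
corr = [correlated_year(rng) for _ in range(years)]

print(f"mean annual loss, independent: ${sum(indep) / years:,.0f}")
print(f"mean annual loss, correlated:  ${sum(corr) / years:,.0f}")
print(f"worst year, independent: ${max(indep):,}")
print(f"worst year, correlated:  ${max(corr):,}")
```

The means converge, but the correlated book's worst year is the entire portfolio at once; no realistic premium or reserve covers that year, which is why carriers decline the risk rather than price it.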
AI Securities Litigation Explosion#
Class Actions Surge#
AI-related securities class actions have become a dominant litigation category:
Filing Statistics:
- AI-related filings more than doubled from 7 in 2023 to 15 in 2024
- Through mid-2025, 12 additional AI cases had been filed, putting the year on track to exceed 2024 totals
- AI is now among the top three securities class action (SCA) trigger categories, alongside COVID-19 and SPACs
Market Cap Losses:
- The Disclosure Dollar Loss Index reached $403 billion in H1 2025, a 56% increase from H2 2024
- The Maximum Dollar Loss Index hit $1.85 trillion, up 154%
- 15 “mega” cases in H1 2025 alone, three times the historical average
Common Allegations:
- “AI washing”: Companies allegedly overstating AI sophistication, effectiveness, or capabilities
- Concealing reliance on manual labor, third-party tools, or non-AI solutions while marketing offerings as “AI-driven”
- Missed projections based on AI development timelines
- Data security incidents tied to AI systems
Notable 2025 Cases:
- Apple shareholders sued over delayed Siri AI upgrades after WWDC 2024 announcements
- Multiple cases alleging biometric data collection through AI systems
- Employment discrimination suits over AI hiring tools
D&O Insurance Implications#
Directors face personal liability exposure when companies make AI-related representations. Standard D&O policies increasingly exclude AI claims entirely, leaving board members personally at risk.
OpenAI’s Insurance Challenge#
$300 Million Coverage for Billions in Exposure#
According to the Financial Times, OpenAI secured approximately $300 million in AI-specific insurance coverage through broker Aon, though some sources dispute the exact figure.
The Coverage Gap:
That coverage falls far short of OpenAI’s legal exposure:
- NYT v. OpenAI: Billions in potential copyright damages
- Bartz v. Anthropic proposed settlement: $1.5 billion for 500,000 authors (rejected by court)
- Multiple wrongful death suits following teen suicides
- Over 50 copyright cases across the AI industry
Self-Insurance Considerations:
OpenAI has reportedly explored “self-insurance” options, including:
- Setting aside investor funding to expand coverage
- Creating a “captive” insurance vehicle, a ringfenced structure often used for emerging risks
OpenAI confirmed it has insurance in place and is evaluating different structures but does not currently operate a captive.
Munich Re: The Specialized AI Insurance Pioneer#
aiSure™ Performance Insurance#
Munich Re, the German reinsurer, has offered AI-specific insurance products since 2018 through its aiSure™ program.
Coverage Model:
Munich Re provides performance-guarantee insurance covering:
- AI vendor financial losses from underperforming AI solutions
- Third-party liability for AI failures
- Copyright infringement risks for generative AI
- Discrimination claims when AI exhibits bias
Rigorous Due Diligence:
Munich Re’s underwriting requires individualized technical assessment:
- Multidisciplinary teams including research scientists validate data science methodologies
- Domain experts (cybersecurity specialists, engineers, medical doctors) evaluate application context
- The process quantifies the probability and severity of model underperformance
- Contract terms are created based on individual AI system characteristics
Key Insight:
Munich Re’s approach demonstrates that AI can be insured, but only through deep technical evaluation and individualized pricing. Blanket coverage based on actuarial averages remains impossible.
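The individualized approach described above can be sketched as a pricing formula: premium derives from an assessed failure probability and severity for the specific system, plus a model-risk load and expenses. This is a minimal illustration of the idea, not Munich Re's actual methodology; every number and the loading structure are assumptions.

```python
# Hedged sketch of per-system performance-guarantee pricing:
# premium = expected loss * (1 + model-risk load) * (1 + expense load).
# All figures below are invented for illustration.
from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    p_underperformance: float   # assessed annual probability the model underperforms
    expected_severity: float    # expected dollar loss given underperformance
    uncertainty_margin: float   # extra load for model risk (thin data, novel tech)
    expense_ratio: float = 0.25 # assumed broker/admin costs on top of risk premium

def annual_premium(a: AIRiskAssessment) -> float:
    pure = a.p_underperformance * a.expected_severity  # expected annual loss
    loaded = pure * (1 + a.uncertainty_margin)         # model-risk load
    return loaded * (1 + a.expense_ratio)              # expense load

# A well-validated system vs. an opaque one (hypothetical assessments):
validated = AIRiskAssessment(0.02, 2_000_000, uncertainty_margin=0.30)
opaque = AIRiskAssessment(0.10, 5_000_000, uncertainty_margin=1.50)
print(f"validated system: ${annual_premium(validated):,.0f}")
print(f"opaque system:    ${annual_premium(opaque):,.0f}")
```

The asymmetry is the point: without the technical due diligence that narrows `p_underperformance` and the uncertainty margin, the only defensible price is an uneconomic one.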
The “Silent AI” Problem#
Inadvertent Coverage Creating Exposure#
“Silent AI” refers to AI-driven risks neither explicitly included nor excluded in insurance policies, leaving ambiguous coverage gaps.
The Pattern:
As with the “silent cyber” exposures that emerged in the 2010s, insurers now face claims under policies never designed for AI risks:
- General liability policies written before LLMs existed
- Professional liability coverage that doesn’t address AI tool usage
- Product liability policies applied to AI “products”
Insurer Response:
To eliminate silent exposure, carriers are adding explicit exclusions, but this shifts risk entirely to policyholders who may not realize their coverage has changed.
Professional Liability Insurance Gaps#
Lawyers and AI Mistakes#
The American Bar Association has raised alarms about coverage gaps as lawyers rapidly adopt AI tools:
Adoption Statistics:
- 43% of Am Law 200 firms had dedicated generative AI budgets (February 2024 LexisNexis survey)
- AI tools are used for research, drafting, due diligence, and contract review
Coverage Uncertainty:
- No standardized AI coverage exists for legal malpractice
- Cyber policies may cover some AI risks, but “tremendous variation” exists
- Property policies increasingly use low sublimits (e.g., $500,000 AI claim cap on $10 million policy)
The Mata v. Avianca Problem:
When attorneys rely on AI-hallucinated citations, as in the infamous Mata case, questions arise:
- Is this a “professional service” triggering malpractice coverage?
- Is it a technology failure excluded by cyber carve-outs?
- Is it outside all coverage as an “AI mistake”?
Physicians and Clinical AI#
Healthcare providers face similar uncertainty:
- 1,247 AI-enabled medical devices are FDA-authorized
- Malpractice policies typically cover “professional services rendered by or on behalf of” the insured
- When AI makes a diagnostic recommendation the physician follows, coverage depends on policy interpretation
Sweeping Exclusions Emerge#
The Breadth of New AI Exclusions#
Management liability policies are adding AI exclusions that may eliminate coverage for:
- Discrimination cases involving AI résumé screening tools
- Negligence claims tied to AI-driven contract review
- Fiduciary duty allegations that boards failed to oversee AI risks
- Any claim involving “use, deployment, or development” of AI
Sample “Absolute” Exclusion Language:
“Any actual or alleged use, deployment, or development of Artificial Intelligence”
This language could exclude claims even when AI played a minor role in the underlying conduct.
Impact on Businesses#
Companies face three scenarios:
- Reduced coverage: AI-specific exclusions eliminate protection
- Higher premiums: Where coverage exists, costs are rising
- Uninsured exposure: Many organizations simply lack coverage for AI risks
The net effect is to shift risk back onto firms, often without their knowledge until a claim is denied.
Real-World Incidents Driving Insurer Concerns#
Financial Losses from AI Failures#
Arup Deepfake Fraud ($25 million): Global engineering firm Arup lost $25 million to criminals who used AI-generated video and audio to impersonate senior executives on a live video conference, duping an employee into authorizing fraudulent transfers.
Air Canada Chatbot: Air Canada’s chatbot invented a bereavement fare discount policy, which the company was forced to honor in Moffatt v. Air Canada, establishing that companies cannot disclaim liability by treating AI as a “separate entity.”
Google AI Overview Defamation ($110 million lawsuit): Google’s AI Overview feature falsely accused a solar company of legal troubles, prompting a $110 million defamation lawsuit.
Wrongful Death Claims: Multiple families have sued AI companies after teen suicides allegedly influenced by chatbot interactions. These cases remain in early stages, but potential damages are substantial.
Implications for AI Deployers#
Assessing Your Coverage#
Organizations using AI should immediately:
1. Review Existing Policies
- Check for AI-specific exclusions added in recent renewals
- Identify “silent AI” coverage gaps
- Understand sublimits that may cap AI claim recovery
2. Catalog AI Exposures
- Document all AI systems in use
- Identify highest-risk applications (customer-facing, healthcare, employment)
- Map AI risks to existing coverage categories
3. Engage Brokers Early
- Discuss AI coverage needs before renewal
- Request manuscript endorsements for critical AI risks
- Explore specialty markets (Munich Re, Lloyd’s syndicates)
4. Consider Self-Insurance Structures
- For large organizations, captive insurance vehicles may be appropriate
- Risk retention groups could pool AI exposures across industries
- Build reserves for uninsured AI risks
Contractual Risk Transfer#
When selecting AI vendors:
Indemnification Provisions:
- Seek broad indemnification for AI-generated content
- Ensure indemnification covers copyright, defamation, and discrimination claims
- Avoid caps that leave substantial exposure
Insurance Requirements:
- Require vendors to maintain AI-specific coverage
- Request certificates of insurance
- Consider whether vendor coverage adequately protects your organization
Limitation of Liability:
- Resist caps that limit recovery to subscription fees
- Carve out intellectual property and gross negligence claims
- Ensure caps are proportionate to potential exposure
Implications for the Insurance Industry#
The Capacity Problem#
The insurance industry collectively lacks capacity to cover AI industry exposure:
- Major copyright cases alone could generate liability exceeding total industry surplus
- Correlation risk means a single model flaw could trigger simultaneous claims globally
- Regulatory penalties add unpredictable government-imposed costs
Potential Solutions#
Specialist Coverage: Munich Re’s model, individualized technical due diligence with customized pricing, may be the only viable approach. This limits scale but provides genuine coverage.
Industry Pools: Similar to nuclear or terrorism insurance pools, AI may require industry-wide risk-sharing mechanisms to spread correlated exposure.
Regulatory Frameworks: Clear liability allocation through legislation could make AI risks more predictable and thus more insurable. The EU AI Act’s tiered approach may serve as a model.
Mandatory Insurance: Some jurisdictions may require AI deployers to maintain minimum coverage, creating a market even if voluntary demand is suppressed by cost.
Looking Forward#
The AI insurance crisis reflects a fundamental tension: businesses need to deploy AI to remain competitive, but cannot transfer the associated risks. Insurers cannot price risks they cannot model, and modeling requires claims data that does not yet exist.
Key Questions to Watch:
- Will courts treat AI as products (enabling prediction) or novel instruments (requiring new frameworks)?
- How will major copyright cases resolve, and what damages will they establish?
- Will regulation create predictable liability patterns that enable pricing?
- Can specialty markets scale to meet demand?
Until these questions are answered, the insurance gap will persist. Organizations deploying AI must accept significant self-insured exposure and plan accordingly.