AI Professional Liability Insurance Coverage

Key Takeaways
  • Most professionals don’t know if their malpractice insurance covers AI-related claims, and increasingly, it doesn’t
  • Major carriers (AIG, Berkley, Hamilton) are actively rolling out AI exclusions
  • Verisk’s 2026 standardized exclusions could reshape market-wide coverage overnight
  • New AI-specific policies are emerging (like Armilla’s Lloyd’s-backed coverage), but adoption is limited
  • Action required: ask your carrier directly about AI coverage before renewal; don’t assume coverage exists

The Growing AI Coverage Gap

Professional liability insurance was designed for a world where humans made decisions and mistakes. As AI tools increasingly participate in professional services, from legal research to medical diagnosis to financial advice, a dangerous gap is emerging between the risks professionals face and the coverage they assume they have.

The core problem: most professionals using AI tools today don’t know whether their malpractice insurance will cover AI-related claims. And increasingly, the answer is no.

Looking for AI-Specific Coverage Solutions?
For detailed analysis of emerging AI insurance products, coverage grants, exclusions, claims-made issues, and practical claim scenarios, see our comprehensive guide: AI-Specific Professional Liability Insurance.

The Scope of the Problem

AI Adoption Has Outpaced Insurance Frameworks

  • 60% of physicians were using AI in clinical practice as of 2024, according to an American Medical Association survey
  • 43% of Am Law 200 firms had dedicated budgets for generative AI tools as of February 2024
  • 88% of companies use AI for initial candidate screening in hiring, per the World Economic Forum

Meanwhile, traditional professional liability policies were written before AI existed. They typically cover “professional services” provided by licensed practitioners making human judgments. When an AI tool generates a hallucinated case citation, misses a cancer on imaging, or recommends a discriminatory hiring decision, does that constitute a “professional service” covered under traditional policies?

Many insurers are betting the answer is no.

How Insurers Are Responding

The Rise of AI Exclusions

Insurance carriers are rapidly rolling out AI-related exclusions that strip coverage from claims involving AI tools, automated decision-making, or generative platforms. This trend accelerated dramatically in 2024-2025.

Berkley’s “Absolute” AI Exclusion (2025)

W.R. Berkley introduced the first so-called “Absolute” AI exclusion in its D&O, E&O, and Fiduciary Liability products. The exclusion purports to broadly eliminate coverage for “any actual or alleged use, deployment, or development of Artificial Intelligence.”

The endorsement enumerates specific applications including:

  • AI-generated content
  • Failure to detect AI-produced materials
  • Inadequate AI governance
  • Chatbot communications
  • Regulatory actions related to AI oversight

Hamilton’s Generative AI Exclusion

Hamilton Insurance Group’s professional liability exclusion removes coverage for any claim, wrongful act, damages, or defense costs “based upon, arising out of, or in any way involving any actual or alleged use of generative artificial intelligence by the insured.”

For businesses incorporating generative AI into products or services, this could effectively eliminate coverage entirely.

Major Carriers Seeking Regulatory Approval

According to the Financial Times, major carriers including AIG, Great American, and W.R. Berkley have formally asked U.S. state regulators for permission to exclude AI-related liabilities from commercial insurance policies. As one underwriter explained, AI models operate as “too much of a black box” to price accurately.

Verisk’s Standardized Exclusions (2026)

Verisk, one of the largest creators of standardized policy forms in the U.S. insurance market, plans to introduce new general liability exclusions for generative AI starting in January 2026. Because insurers nationwide often adopt Verisk templates, this change could rapidly reshape market-wide coverage.

Sub-Limits and Coverage Caps

Even where AI isn’t explicitly excluded, insurers are limiting exposure through sub-limits. A policy with a $10 million face amount may cap AI-related losses at $500,000, creating significant uninsured exposure for firms that rely heavily on AI tools.
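To make the arithmetic concrete, here is a minimal sketch (in Python) of how a sub-limit caps an insurer’s payout, using the $10 million face amount and $500,000 AI sub-limit mentioned above and a hypothetical loss figure. Real policies also involve deductibles, defense-cost erosion, and exclusion wording that this sketch ignores.

```python
def uninsured_exposure(loss: float, policy_limit: float, ai_sublimit: float) -> float:
    """Portion of an AI-related loss left uncovered once a sub-limit applies.

    Illustrative only: ignores deductibles, defense-cost erosion,
    and exclusion language found in real policies.
    """
    covered = min(loss, ai_sublimit, policy_limit)
    return max(loss - covered, 0.0)

# Hypothetical $2M AI-related loss under a $10M policy with a
# $500K AI sub-limit: only $500K is covered, leaving $1.5M uninsured.
print(uninsured_exposure(2_000_000, 10_000_000, 500_000))  # 1500000.0
```

The point of the sketch: once losses exceed the sub-limit, every additional dollar of AI-related loss is borne by the firm, regardless of how large the policy’s face amount is.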

Industry-Specific Concerns

Legal Profession

ALAS Warning to Law Firms

Attorneys’ Liability Assurance Society (ALAS), the country’s largest lawyer-owned mutual insurer covering major law firms, issued a bulletin titled “ChatGPT: Not Ready for Prime Time” warning policyholders that:

  • Use of generative AI could result in legal malpractice claims
  • Coverage under professional liability policies may not apply
  • Confidentiality risks extend beyond the firm if AI providers experience breaches

ALAS Senior Vice President Mary Beth Robinson noted that “if there is a cyber intrusion [into OpenAI or ChatGPT], not only will that data potentially be lost to threat actors, but they could conceivably also obtain the firm’s searches… [gaining] access into the mind of a lawyer.”

Coverage Gap Analysis

The February–March 2025 issue of the ABA Journal reported that lawyers looking for comprehensive AI insurance policies are “out of luck.” Key concerns include:

  • AI tool use may not satisfy definitions of “professional service”
  • Sanctions for AI-generated fake citations may not constitute covered claims
  • Traditional policies weren’t designed for AI-specific failure modes like hallucinations

Healthcare

The Liability Distribution Question

For physicians using AI diagnostic and treatment tools, liability distribution remains unsettled. Current malpractice policies don’t specify AI coverage, and insurers don’t typically ask physicians to list all technologies they use.

Potential gaps include:

  • AI algorithm errors that physicians rely upon
  • Failure to override incorrect AI recommendations
  • Using AI tools outside their approved scope

The Federation of State Medical Boards suggested in April 2024 that member boards should hold clinicians, not AI makers, liable when technology makes errors. This places the coverage burden squarely on physician malpractice policies that may not adequately address AI risks.

One Model for AI Coverage

Digital Diagnostics, maker of the first FDA-cleared autonomous diagnostic AI (IDx-DR for diabetic retinopathy), offers a different approach: the company contractually assumes liability for hospitals and physicians for faulty diagnoses by its system and carries its own insurance for that assumed risk. This vendor-assumed-liability model remains rare but could become more common.

General Professional Services

CGL Policy Gaps

Commercial general liability policies often exclude coverage for professional liability claims, and the standard professional services exclusion can bar even third-party claims alleging bodily injury or property damage arising from computer software. This creates potential gaps for companies selling AI-powered products that cause physical harm.

Tech E&O Limitations

Technology errors and omissions policies typically exclude bodily injury or property damage. For AI companies serving healthcare, transportation, or energy sectors where physical harm is possible, this exclusion creates significant exposure.

What Professionals Should Ask Their Carriers

Essential Questions

Before your next policy renewal, ask your insurance broker or carrier directly:

  1. “Does my policy explicitly cover claims arising from my use of AI tools in professional services?”

    • Don’t assume coverage exists; get it in writing
  2. “Are there any AI-related exclusions, sub-limits, or endorsements in my policy?”

    • Review all policy language, not just the declarations page
  3. “If I’m sanctioned for using AI-generated content that was incorrect, is that covered?”

    • Particularly relevant for attorneys after Mata v. Avianca
  4. “Does coverage apply if I rely on an AI recommendation that turns out to be wrong?”

    • Relevant for physicians, financial advisors, and other licensed professionals
  5. “What disclosure obligations do I have about AI use?”

    • Some policies may require notification of AI adoption
  6. “Are my cyber and professional liability policies coordinated for AI risks?”

    • AI breaches may involve both policy types

For Healthcare Providers Specifically

  • Does my policy cover claims arising from AI diagnostic tool errors?
  • If I override the AI and it was right, am I covered? What if I follow the AI and it’s wrong?
  • Are AI tools I’m using FDA-cleared, and does that affect coverage?
  • Does my hospital’s coverage extend to AI tools I use in practice?

For Attorneys Specifically

  • Does using AI for legal research constitute “professional services” under my policy?
  • Are bar disciplinary proceedings for AI misuse covered as claims?
  • What notification requirements apply when I adopt new AI tools?
  • Does my coverage extend to paralegals and staff using AI?

Emerging AI-Specific Coverage

Affirmative AI Insurance

The good news: new insurance products are emerging specifically designed for AI risks.

Armilla AI Liability Insurance

In April 2025, Armilla Insurance Services launched what it describes as the first affirmative AI liability insurance policy, backed by Lloyd’s of London underwriters including Chaucer.

The policy specifically covers:

  • AI underperformance and failure to perform as intended
  • AI-generated errors, hallucinations, and inaccuracies causing damages
  • Deteriorating AI model performance over time
  • Mechanical failures and deviations from expected behavior

As Armilla CEO Karthik Ramakrishnan explained: “Businesses are racing to deploy AI, but their risk management and insurance tools haven’t kept pace. There’s a growing concern of ‘silent AI cover’, the uncertainty of whether existing policies will respond to AI-specific failures.”

Other Emerging Products

Some insurers are offering explicit AI professional services coverage as endorsements to existing policies, designed to expand rather than restrict protection. However, these products remain relatively new and vary significantly in scope.

Risk Mitigation Strategies

Document Everything

Maintain detailed records of:

  • Which AI tools you use and for what purposes
  • Vendor representations about accuracy and reliability
  • Your validation and verification procedures
  • Training you’ve completed on AI tool limitations
  • Instances where you overrode AI recommendations

Implement Verification Protocols

Insurers may look more favorably on claims where professionals can demonstrate:

  • Systematic verification of AI outputs before reliance
  • Human review protocols for AI-generated work
  • Quality control measures appropriate to the risk

Vendor Contract Review

When selecting AI tools, consider:

  • Does the vendor carry liability insurance for their product?
  • Will they contractually indemnify you for errors?
  • What representations do they make about accuracy?
  • What limitations do they disclose?

Stay Current on Policy Terms

  • Review your policy annually for AI-related changes
  • Ask about AI coverage at each renewal
  • Consider supplemental AI-specific coverage if available
  • Monitor industry developments as exclusions proliferate

The “Silent AI” Problem

Perhaps the greatest risk is what insurers call “silent AI”, situations where AI-driven risks are neither explicitly covered nor excluded by existing policies. This ambiguity means:

  • You may believe you’re covered when you’re not
  • Claims may be denied based on policy interpretations you didn’t anticipate
  • Litigation over coverage can be costly and uncertain

As AI adoption accelerates faster than insurance frameworks can adapt, professionals bear the risk of this gap. The prudent approach is to assume coverage may not exist and verify explicitly rather than discover the gap when a claim is denied.

Resources

Related Pages:


AI-Specific Professional Liability Insurance: Emerging Coverage for Emerging Risks

The Insurance Industry’s AI Reckoning

The insurance industry faces an unprecedented challenge: how to underwrite risks from technology that even its creators don’t fully understand. As AI systems increasingly make decisions that traditionally required human judgment, and increasingly cause harm when those decisions go wrong, insurers are scrambling to adapt products designed for a pre-AI world.

AI Hallucinations & Professional Liability: Malpractice Exposure for Lawyers Using LLMs

Beyond Sanctions: The Malpractice Dimension of AI Hallucinations

Court sanctions for AI-generated fake citations have dominated headlines since Mata v. Avianca. But sanctions are only the visible tip of a much larger iceberg. The deeper exposure lies in professional malpractice liability, claims by clients whose cases were harmed by AI-generated errors that their attorneys failed to catch.

AI Insurance Industry Crisis & Coverage Gaps

The AI Insurance Crisis: Uninsurable Risk?

The insurance industry faces an unprecedented challenge: how to price and cover risks from technology that even its creators cannot fully predict. As AI systems generate outputs that cause real-world harm, defamatory hallucinations, copyright infringement, discriminatory decisions, even deaths, insurers are confronting a fundamental question: can AI risks be insured at all?

AI Debt Collection and FDCPA Violations: Legal Guide

When AI Becomes the Debt Collector

The debt collection industry, historically notorious for harassment and intimidation, is rapidly adopting artificial intelligence. AI chatbots can contact millions of debtors in days. Voice cloning technology creates synthetic agents indistinguishable from humans. Algorithmic systems decide who gets sued, when to call, and how aggressively to pursue payment.

AI Defamation and Hallucination Liability

The New Frontier of Defamation Law

Courts are now testing what attorneys describe as a “new frontier of defamation law” as AI systems increasingly generate false, damaging statements about real people. When ChatGPT falsely accused a radio host of embezzlement, when Bing confused a veteran with a convicted terrorist, when Meta AI claimed a conservative activist participated in the January 6 riot, these weren’t glitches. They represent a fundamental challenge to defamation law built on human publishers and human intent.