AI Standard of Care

When AI makes decisions that affect lives, what duty is owed?

We analyze evolving standards of care for AI systems in medicine, law, finance, and beyond. As artificial intelligence becomes embedded in professional practice, understanding liability, negligence, and best practices is essential for practitioners, patients, and policymakers.

Healthcare AI Standard of Care

The Medical AI Liability Landscape

Healthcare represents the highest-stakes arena for AI standard of care questions. When diagnostic AI systems, clinical decision support tools, and treatment recommendation algorithms are wrong, patients die.

Key Questions

FDA Approval and Liability

Does FDA clearance of an AI medical device establish a baseline standard of care? Courts are split:

- Some jurisdictions treat FDA 510(k) clearance as creating a presumption of reasonable care
- Others note that FDA clearance addresses safety and efficacy, not necessarily the standard of care for deployment
- The FDA’s evolving framework for AI/ML-based Software as a Medical Device (SaMD) adds complexity

Physician Override Duties

When AI recommendations conflict with clinical judgment, what must physicians do? ...
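One pattern that recurs in this discussion is auditability: whatever the override rule, the AI recommendation, the physician's final decision, and the documented rationale should be captured together. Below is a minimal sketch of such an override log; the data model and function names are hypothetical, invented for illustration rather than drawn from any regulation or product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only: a hypothetical audit-log pattern for clinical
# decision support. AIRecommendation and log_override are invented names,
# not drawn from any real system or regulatory requirement.

@dataclass
class AIRecommendation:
    patient_id: str
    model_version: str
    suggestion: str
    confidence: float  # model-reported confidence, 0.0 - 1.0

def log_override(rec: AIRecommendation, clinician_decision: str,
                 rationale: str) -> dict:
    """Record the AI suggestion, the clinician's final call, and the
    documented rationale, flagging any disagreement between the two."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": rec.patient_id,
        "model_version": rec.model_version,
        "ai_suggestion": rec.suggestion,
        "ai_confidence": rec.confidence,
        "clinician_decision": clinician_decision,
        "overridden": clinician_decision != rec.suggestion,
        "rationale": rationale,
        # In a real deployment this entry would go to an append-only store.
    }
```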

January 1, 2025

Legal AI Standard of Care

AI and Attorney Professional Responsibility

The legal profession faces unique standard of care challenges as AI tools become ubiquitous in practice. From legal research to document review to contract drafting, AI is transforming how lawyers work—and creating new liability risks.

The Hallucination Problem

The most dramatic AI failures in legal practice involve large language models that confidently fabricate case citations, statutes, and legal principles that do not exist.

High-Profile Incidents

- Mata v. Avianca (2023) - Attorneys sanctioned for citing AI-generated fake cases in federal court
- Multiple subsequent cases involving similar fabricated citations
- Bar disciplinary proceedings against attorneys who failed to verify AI outputs

The Verification Duty

Courts and bar associations are converging on a clear standard: attorneys must verify all AI-generated legal research. ...
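As an illustration of what a verification workflow can look like in software, here is a minimal sketch that extracts citation-shaped strings from a draft and flags any a citation database cannot confirm. The regex is deliberately simplified, and lookup_citation is a hypothetical stand-in for a query against a real service such as CourtListener, Westlaw, or Lexis; nothing here is drawn from an actual bar rule.

```python
import re

# Illustrative sketch only. CITATION_RE is a simplified pattern for U.S.
# reporter citations (e.g., "573 U.S. 208"); real citation formats are far
# more varied. KNOWN_GOOD stands in for an authoritative citation database.

CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.'\s]{0,30}?\s*\d{1,5}\b")

KNOWN_GOOD = {"573 U.S. 208"}  # hypothetical verified-citation store

def lookup_citation(cite: str) -> bool:
    # Hypothetical: replace with a query against a real citation service.
    return cite.strip() in KNOWN_GOOD

def unverified_citations(brief_text: str) -> list[str]:
    """Citations the database cannot confirm; every one of these must be
    pulled and read by a human before the brief is filed."""
    return [c for c in CITATION_RE.findall(brief_text)
            if not lookup_citation(c)]

draft = "Compare 573 U.S. 208 with the fabricated 999 F.4th 123."
print(unverified_citations(draft))  # ['999 F.4th 123'] -> human review
```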

January 1, 2025

Financial AI Standard of Care

Fiduciary Duties in the Age of AI

Financial services face a unique standard of care challenge: fiduciary duties that predate AI must now be applied to algorithmic decision-making. What does it mean to act in a client’s best interest when an AI makes the decision?

Investment Management

Robo-Advisor Standards

SEC and FINRA have issued guidance making clear that robo-advisors must meet the same fiduciary standards as human advisors:

- Suitability - AI recommendations must be suitable for the specific client (see the sketch below)
- Best execution - Algorithmic trading must achieve best execution
- Disclosure - Clients must understand they’re receiving AI-driven advice
- Conflicts of interest - AI optimization targets must align with client interests

Algorithm Governance

Investment firms deploying AI face expectations around: ...
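To make the suitability obligation concrete, here is a minimal sketch of a pre-trade suitability gate. The profile fields, risk scale, and 10% concentration cap are invented for illustration; they are not SEC or FINRA requirements.

```python
from dataclasses import dataclass

# Minimal illustrative sketch of a pre-trade suitability gate. All fields
# and thresholds are invented; actual suitability obligations are richer.

@dataclass
class ClientProfile:
    risk_tolerance: int      # 1 (conservative) .. 5 (aggressive)
    horizon_years: float
    liquid_net_worth: float

@dataclass
class Recommendation:
    symbol: str
    risk_score: int          # 1 .. 5, per the firm's own rating method
    min_horizon_years: float
    amount: float

def suitability_check(client: ClientProfile, rec: Recommendation) -> list[str]:
    """Return reasons the recommendation fails; empty means pass. A failing
    recommendation should be blocked or escalated for human review."""
    reasons = []
    if rec.risk_score > client.risk_tolerance:
        reasons.append("risk score exceeds client risk tolerance")
    if rec.min_horizon_years > client.horizon_years:
        reasons.append("holding period exceeds client investment horizon")
    if rec.amount > 0.10 * client.liquid_net_worth:  # illustrative 10% cap
        reasons.append("position exceeds concentration limit")
    return reasons

client = ClientProfile(risk_tolerance=2, horizon_years=3, liquid_net_worth=100_000)
print(suitability_check(client, Recommendation("XYZ", 4, 10, 25_000)))
```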

January 1, 2025

Employment AI Standard of Care

AI in Employment: A Liability Flashpoint

Employment decisions represent one of the most contentious frontiers for AI liability. Automated hiring tools, resume screeners, video interview analyzers, and performance evaluation systems increasingly determine who gets jobs, promotions, and terminations. When these systems discriminate—whether by intentional design or through embedded bias—the legal consequences are mounting rapidly.

The Scale of AI Hiring

The World Economic Forum reported in 2025 that roughly 88% of companies now use AI for initial candidate screening. This massive adoption has outpaced regulatory frameworks, creating significant liability exposure for employers and technology vendors alike. ...
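The standard first-pass screen for discrimination in selection tools is the EEOC's four-fifths (80%) rule: if any group's selection rate falls below four-fifths of the highest group's rate, the tool warrants scrutiny. A short worked example, with invented numbers, is sketched below.

```python
# Four-fifths (80%) rule screen for adverse impact in a selection tool.
# The applicant numbers below are invented for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio per group; any ratio below 0.8 is a red flag."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical data: an AI screener advances 200 of 1,000 applicants from
# group A but only 120 of 1,000 from group B.
print(four_fifths_ratios({"A": (200, 1000), "B": (120, 1000)}))
# {'A': 1.0, 'B': 0.6} -> group B is well below the 0.8 threshold
```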

January 1, 2025

AI Cybersecurity Standard of Care

AI and Cybersecurity: A Two-Sided Liability Coin

Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors—data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence? This emerging standard of care encompasses both the duty to secure AI systems and the potential duty to use AI for security. ...
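As a small illustration of the duty-to-secure side, here is a heuristic prompt-injection screen for untrusted text headed to an LLM. The patterns are invented for illustration, and pattern matching alone is easy to evade; it is no substitute for architectural controls such as privilege separation, output filtering, and human review.

```python
import re

# Illustrative heuristic screen for prompt-injection attempts in untrusted
# input. The pattern list is invented for illustration and is necessarily
# incomplete; treat a match as a reason to quarantine, not a guarantee.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"disregard (the|your) system prompt", re.I),
    re.compile(r"you are now (?:an?|the) ", re.I),
]

def flag_possible_injection(untrusted_text: str) -> list[str]:
    """Return the patterns that matched; a non-empty result means the input
    should be quarantined for review rather than passed to the model."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(untrusted_text)]

print(flag_possible_injection("Please ignore previous instructions and ..."))
```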

January 1, 2025

Autonomous Vehicle AI Liability

The Autonomous Vehicle Liability Reckoning

Autonomous vehicle technology promised to eliminate human error—responsible for over 90% of crashes. Instead, a new category of liability has emerged: algorithmic negligence, where AI systems make fatal errors that cannot be easily explained, predicted, or prevented. As self-driving technology scales from test fleets to consumer vehicles, courts are grappling with fundamental questions: Who bears responsibility when software kills? What disclosure duties exist for AI limitations? And does the promise of autonomy shift liability from driver to manufacturer? ...

January 1, 2025

Aviation AI Safety & Air Traffic Control Liability

Aviation AI: Where “Near Perfect Performance” Meets Unprecedented Risk

Aviation demands what a 50-year industry veteran called “near perfect performance.” The consequences of failure—hundreds of lives lost in seconds—make aviation AI liability fundamentally different from any other industry. As AI systems increasingly control aircraft, manage air traffic, and make split-second decisions that “humans may not fully understand or control,” the legal frameworks developed for human-piloted aviation are straining under the weight of technological change. ...

January 1, 2025

Construction AI Standard of Care

AI in Construction Safety: A Rapidly Evolving Standard of Care

Construction remains one of the deadliest industries in America. With approximately 1,069 fatal occupational injuries annually—accounting for nearly 20% of all workplace deaths—the industry faces relentless pressure to improve safety outcomes. Artificial intelligence promises transformative potential: predictive analytics identifying hazards before they cause harm, computer vision detecting PPE violations in real time, and autonomous equipment removing humans from dangerous tasks. ...
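To illustrate the deployment questions such tools raise, here is a minimal sketch of how a PPE monitor might gate automated alerts behind a confidence threshold, routing low-confidence detections to human review instead. The Detection type, threshold value, and routing logic are all hypothetical, invented for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch: confidence-gated alerting for a hypothetical
# computer-vision PPE monitor. The upstream vision model is assumed; only
# the routing decision is shown, and the 0.85 threshold is invented.

@dataclass
class Detection:
    worker_id: str
    missing_item: str   # e.g., "hard hat", "harness"
    confidence: float   # 0.0 - 1.0, as reported by the vision model

ALERT_THRESHOLD = 0.85  # below this, queue for human review instead

def route_detection(d: Detection) -> str:
    if d.confidence >= ALERT_THRESHOLD:
        return f"ALERT supervisor: {d.worker_id} missing {d.missing_item}"
    return f"REVIEW QUEUE: low-confidence detection for {d.worker_id}"

print(route_detection(Detection("W-102", "hard hat", 0.93)))
print(route_detection(Detection("W-214", "harness", 0.61)))
```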

January 1, 2025

Education AI Standard of Care

AI in Education: An Emerging Liability Crisis

Educational institutions face a rapidly expanding wave of AI-related litigation. From proctoring software that disproportionately flags students of color, to AI detection tools that falsely accuse students of cheating, to massive data collection on minors—schools, testing companies, and technology vendors now confront significant liability exposure. The stakes extend beyond financial damages: these cases implicate fundamental questions of educational access, disability accommodation, and civil rights. ...
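A short worked example, with invented numbers, shows why detection tools generate so many false accusations: base rates. Even a detector with a 1% false-positive rate is wrong about a large share of the students it flags when most submissions are honest.

```python
# Worked base-rate example; all rates and counts are invented to
# illustrate the arithmetic, not measured from any real detector.

students       = 10_000
cheating_rate  = 0.02   # assume 2% of submissions involve AI misuse
true_pos_rate  = 0.90   # detector catches 90% of actual misuse
false_pos_rate = 0.01   # detector wrongly flags 1% of honest work

cheaters = students * cheating_rate        # 200
honest   = students - cheaters             # 9,800

true_flags  = cheaters * true_pos_rate     # 180
false_flags = honest * false_pos_rate      # 98

share_false = false_flags / (true_flags + false_flags)
print(f"{false_flags:.0f} of {true_flags + false_flags:.0f} flags are false "
      f"({share_false:.0%})")  # roughly 1 in 3 accusations is false
```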

January 1, 2025

Housing AI Standard of Care

Algorithmic Discrimination in Housing: A Civil Rights Flashpoint

Housing decisions—who gets approved to rent, how homes are valued, and who receives mortgage loans—increasingly depend on algorithmic systems. These AI-powered tools promise efficiency and objectivity, but mounting evidence shows they often perpetuate and amplify the discriminatory patterns embedded in America’s housing history. For housing providers, lenders, and technology vendors, the legal exposure is significant and growing.

The Scale of Algorithmic Housing Decisions

Tenant screening algorithms evaluate millions of rental applications annually. Automated valuation models (AVMs) like Zillow’s Zestimate influence buyer and seller expectations across the housing market. AI-driven mortgage underwriting systems determine creditworthiness at unprecedented scale. As CFPB Director Rohit Chopra has observed: “It is tempting to think that machines crunching numbers can take bias out of the equation, but they can’t.” ...

January 1, 2025