
Three Giants: Comparing AI Regulation in China, the EU, and the United States

A Global Regulatory Divergence

The United States, European Union, and China are the world’s three dominant AI powers. Together they produce most frontier AI research, deploy most commercial AI systems, and shape most global AI policy. Yet their approaches to AI regulation, and particularly AI liability, are strikingly different.

Understanding these differences matters for anyone operating across borders, and reveals something deeper: how different legal and political cultures conceptualize the relationship between innovation, harm, and accountability.

The European Union: Comprehensive Ex Ante Regulation

The EU has taken the most comprehensive regulatory approach through the AI Act, which entered into force in August 2024 with provisions phasing in through 2027.

The Risk-Based Framework

The AI Act categorizes AI systems by risk level:

Unacceptable Risk (Prohibited)

  • Social scoring systems
  • Real-time remote biometric identification in public spaces (with law-enforcement exceptions)
  • Manipulation of vulnerable persons
  • Certain predictive policing applications

High Risk (Heavily Regulated)

  • Biometric identification
  • Critical infrastructure management
  • Educational and vocational access decisions
  • Employment decisions
  • Credit and insurance decisions
  • Law enforcement applications
  • Migration and asylum processing

Limited Risk (Transparency Obligations)

  • Chatbots and conversational AI
  • Emotion recognition systems
  • Deepfakes and synthetic content

Minimal Risk (Unregulated)

  • AI-enabled video games
  • Spam filters
  • Most consumer applications
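The four tiers above can be sketched as a simple lookup. This is an illustrative simplification only: the tier names track the Act, but the use-case labels, the mapping, and the `classify` helper are assumptions for demonstration, not a legal classification tool.

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# The mapping below is a simplified assumption; real classification
# requires legal analysis of the Act's annexes.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Hypothetical use-case labels mapped to tiers for illustration.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "employment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a known use case; unknown cases get no default tier,
    because under the Act the tier follows from legal analysis."""
    tier = USE_CASE_TIERS.get(use_case.lower())
    if tier is None:
        raise ValueError(f"no tier mapped for {use_case!r}")
    return tier
```

The key design point the sketch captures: there is no safe default tier. A use case absent from the Act's enumerations still has to be analyzed, not assumed minimal-risk.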

Obligations for High-Risk Systems

Providers of high-risk AI systems must:

  • Implement risk management systems throughout the AI lifecycle
  • Ensure training data meets quality standards
  • Maintain technical documentation
  • Enable logging and traceability
  • Provide transparency to users
  • Allow human oversight
  • Ensure accuracy, robustness, and cybersecurity

Deployers (users) of high-risk systems have their own obligations, including human oversight and monitoring.

Liability Implications

The AI Act creates liability primarily through compliance requirements. Non-compliance can result in:

  • Fines up to €35 million or 7% of global annual turnover, whichever is higher
  • Product recalls or bans
  • Reputational damage from public enforcement actions
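The top fine tier is a higher-of rule: the €35 million figure and the 7% of worldwide annual turnover are alternative caps, and the applicable maximum is whichever is greater. A toy calculation (figures hypothetical):

```python
# Toy illustration of the AI Act's top fine tier: up to EUR 35 million
# or 7% of worldwide annual turnover, whichever is higher.
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Maximum administrative fine for the most serious violations."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A firm with EUR 2B turnover: 7% (EUR 140M) exceeds the flat cap.
print(max_fine_eur(2_000_000_000))  # 140000000.0
# A smaller provider: the EUR 35M flat cap governs.
print(max_fine_eur(100_000_000))    # 35000000.0
```

The asymmetry is deliberate: the percentage cap scales exposure for large firms, while the flat cap keeps penalties meaningful for small ones.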

The EU has also proposed a separate AI Liability Directive that would create civil liability rules, including:

  • Presumption of causation for AI harms where the defendant violated AI Act requirements
  • Disclosure obligations requiring defendants to provide evidence about AI systems
  • Modified burden of proof in certain circumstances

Philosophy

The EU approach reflects precautionary principles: regulate first, permit innovation within defined boundaries. It prioritizes predictability and rights protection over speed of deployment. The implicit theory is that clear rules enable responsible innovation rather than chilling it.

The United States: Sectoral and Ex Post Liability

The US has no comprehensive federal AI legislation. Instead, AI is regulated (when at all) through:

Sectoral Regulators

  • FDA regulates AI medical devices
  • NHTSA oversees autonomous vehicles
  • FTC enforces against deceptive AI practices
  • EEOC addresses AI in employment
  • CFPB monitors AI in consumer finance
  • SEC examines AI in securities markets

Each regulator applies existing authority to AI within its domain, often through guidance, enforcement actions, and occasional rulemaking.

Executive Orders

The October 2023 Executive Order on AI established various requirements for federal use and procurement of AI, directed agencies to use existing authorities, and created reporting requirements for developers of frontier models. However, executive orders can be modified or rescinded by subsequent administrations and have limited force against private actors.

State Legislation

States are filling federal gaps:

  • Colorado enacted an AI discrimination law effective 2026
  • California has considered multiple AI bills (SB 1047 vetoed, others pending)
  • Illinois requires notice for AI in video interviews
  • New York City requires bias audits for AI hiring tools

This creates a patchwork that varies by jurisdiction.

Common Law Liability

The primary US approach to AI harm remains ex post liability through existing legal doctrines:

  • Negligence when AI use falls below the standard of care
  • Product liability when AI constitutes a defective product
  • Statutory claims under employment discrimination, consumer protection, and other laws

Courts adapt existing doctrine to AI contexts, with results varying by jurisdiction and fact pattern.

Philosophy

The US approach reflects skepticism of ex ante regulation and confidence in market mechanisms and tort liability. The implicit theory is that innovation should proceed unless and until specific harms justify specific interventions. Liability after the fact provides incentives without pre-emptive restrictions.

China: State-Directed Techno-Nationalism

China’s AI regulation reflects its distinct political economy: nominally private companies operating within state direction, and technology policy serving national strategic goals.

Specific AI Regulations

China has enacted targeted AI rules rather than comprehensive legislation:

Algorithm Recommendation Regulations (2022)

  • Require transparency about recommendation algorithms
  • Prohibit price discrimination through algorithms
  • Mandate options to decline personalized recommendations
  • Require impact assessments for algorithms affecting public opinion

Deep Synthesis (Deepfake) Regulations (2023)

  • Require labeling of synthetic content
  • Mandate identity verification for deepfake creators
  • Prohibit deepfakes without subject consent
  • Impose platform responsibility for synthetic content

Generative AI Measures (2023)

  • Require registration for public-facing generative AI services
  • Mandate training data compliance with content laws
  • Require content to uphold “socialist core values”
  • Impose security assessments before public deployment

Content Control Integration

Chinese AI regulation is inseparable from content control. Generative AI must not produce content that:

  • Subverts state power
  • Undermines national unity
  • Promotes terrorism or extremism
  • Spreads false information
  • Contains prohibited content under other laws

This integrates AI governance into the broader apparatus of information control.

Liability Framework

Chinese liability for AI harm operates through:

  • Administrative enforcement by the Cyberspace Administration of China (CAC) and other regulators
  • Civil liability under the Civil Code, which includes provisions on technology-related harm
  • Criminal liability for serious violations, particularly involving prohibited content or national security

Enforcement tends to be discretionary and politically influenced, with companies understanding that compliance includes responsiveness to informal government guidance.

Philosophy

China’s approach reflects state primacy in technology development. AI must serve national goals: economic development, social stability, and international competitiveness. Regulation ensures alignment with those goals. Individual rights exist but are subordinate to collective and state interests.

Key Differences Compared

Scope

| Aspect       | EU                       | US                            | China                          |
| ------------ | ------------------------ | ----------------------------- | ------------------------------ |
| Approach     | Comprehensive horizontal | Sectoral vertical             | Targeted specific              |
| Timing       | Ex ante (pre-deployment) | Ex post (liability after harm)| Hybrid (approval + enforcement)|
| Enforcement  | Regulatory agencies      | Courts + regulators           | State administrative           |
| Rights focus | Individual data/dignity  | Consumer protection           | State/collective               |

Risk Tolerance

The three systems have fundamentally different risk tolerances:

EU: Low risk tolerance. The precautionary principle means uncertain risks justify precautionary restrictions. Better to regulate potential harms than permit them pending proof.

US: Higher risk tolerance. Innovation benefits are weighed against speculative harms. Regulation follows demonstrated problems, not anticipated ones.

China: Risk tolerance varies by domain. Commercial applications face moderate oversight; applications touching political stability face intensive control.

Innovation Implications

EU: Compliance costs and pre-approval requirements may slow deployment. The flipside: clear rules reduce uncertainty and create a defined compliance path.

US: Faster deployment but higher liability uncertainty. Companies may face bet-the-company litigation without clear regulatory guidance.

China: Domestic deployment can be rapid within defined boundaries, but boundaries are politically determined and can shift unpredictably.

Cross-Border Complications

Most significant AI systems operate across jurisdictions, creating conflicts:

Data Flows

EU data protection rules restrict training data transfers. US companies operating in Europe face compliance obligations. Chinese data localization requirements keep certain data in China.

Standard Divergence

A high-risk AI system under the EU Act might face no pre-deployment requirements in the US but content review in China. Designing for global compliance is increasingly difficult.

Enforcement Extraterritoriality

The EU AI Act applies to any AI system whose outputs are used in the EU, regardless of where the provider is located. This extends EU regulation globally in practice.

Mutual Recognition

There are no mutual recognition agreements for AI compliance. Certifying an AI system as compliant in one jurisdiction provides no credit in others.

Liability Forum Shopping

These differences create incentives for liability forum shopping:

  • Plaintiffs may prefer US courts for AI harms due to jury trials, class actions, and punitive damages
  • Defendants may prefer EU administrative processes with capped penalties and limited private litigation
  • Both may avoid Chinese processes due to unpredictability and limited due process

Jurisdictional rules, recognition of judgments, and enforcement mechanisms will determine where AI liability ultimately gets resolved.

Convergence or Divergence?

Will these systems converge over time?

Arguments for convergence:

  • Global companies need consistent rules
  • International trade pressure toward harmonization
  • Technical standards may drive regulatory alignment
  • Academic and policy exchange spreads ideas

Arguments for divergence:

  • Regulatory competition as jurisdictions seek advantage
  • Path dependence from existing legal traditions
  • Different political values about rights, state power, innovation
  • Strategic competition in AI as geopolitical issue

The most likely outcome is partial convergence on technical standards with persistent divergence on values-laden questions like content control, surveillance, and human rights.

Strategic Implications

For practitioners, the three-system framework suggests:

For AI Developers

  • Design for EU compliance as the highest common denominator
  • Maintain jurisdiction-specific deployment controls
  • Expect US liability exposure regardless of base location
  • Treat Chinese market as distinct and politically sensitive

For AI Users

  • Understand which jurisdictions’ rules apply to your use
  • Document compliance with multiple frameworks
  • Consider jurisdiction in vendor selection
  • Plan for regulatory evolution

For Policymakers

  • Monitor other jurisdictions’ approaches
  • Consider extraterritorial effects of domestic rules
  • Engage in international standard-setting
  • Anticipate regulatory arbitrage

Conclusion

The US, EU, and China are conducting three simultaneous experiments in AI governance. Each reflects different assumptions about innovation, risk, rights, and the role of the state. Each will produce different outcomes in terms of AI development, deployment, and harm.

There will be no single global answer to AI standard of care. Professionals operating in this space must understand all three systems and navigate their tensions. The fragmented regulatory landscape is not a temporary condition to be resolved but a permanent feature of global AI governance.

Those who understand the differences, and can operate compliantly across all three, will have significant advantages as AI reshapes every industry and jurisdiction.
