A Global Regulatory Divergence#
The United States, European Union, and China are the world’s three dominant AI powers. Together they produce most frontier AI research, deploy most commercial AI systems, and shape most global AI policy. Yet their approaches to AI regulation, and particularly AI liability, are strikingly different.
Understanding these differences matters for anyone operating across borders, and reveals something deeper: how different legal and political cultures conceptualize the relationship between innovation, harm, and accountability.
The European Union: Comprehensive Ex Ante Regulation#
The EU has taken the most comprehensive regulatory approach through the AI Act, which entered into force in 2024 with provisions phasing in through 2027.
The Risk-Based Framework#
The AI Act categorizes AI systems by risk level:
Unacceptable Risk (Prohibited)
- Social scoring systems
- Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions)
- Manipulation of vulnerable persons
- Certain predictive policing applications
High Risk (Heavily Regulated)
- Biometric identification
- Critical infrastructure management
- Educational and vocational access decisions
- Employment decisions
- Credit and insurance decisions
- Law enforcement applications
- Migration and asylum processing
Limited Risk (Transparency Obligations)
- Chatbots and conversational AI
- Emotion recognition systems
- Deepfakes and synthetic content
Minimal Risk (Unregulated)
- AI-enabled video games
- Spam filters
- Most consumer applications
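For teams building internal compliance tooling, this taxonomy maps naturally onto a simple data structure. The sketch below is a hypothetical illustration of how a provider might encode the tiers for first-pass triage; the use-case mapping is a simplified assumption, and real classification turns on the Act's Article 5 prohibitions and Annex III categories, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited"            # banned outright (Article 5)
    HIGH = "heavily regulated"             # conformity obligations (Annex III)
    LIMITED = "transparency obligations"   # disclosure duties only
    MINIMAL = "unregulated"                # no AI Act obligations

# Illustrative mapping only; actual classification requires legal analysis.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """First-pass tier lookup; unknown use cases default to HIGH pending review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown cases to HIGH rather than MINIMAL mirrors the Act's precautionary posture: treat an unclassified system as regulated until a legal review says otherwise.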
Obligations for High-Risk Systems#
Providers of high-risk AI systems must:
- Implement risk management systems throughout the AI lifecycle
- Ensure training data meets quality standards
- Maintain technical documentation
- Enable logging and traceability
- Provide transparency to users
- Allow human oversight
- Ensure accuracy, robustness, and cybersecurity
Deployers (users) of high-risk systems have their own obligations, including human oversight and monitoring.
Liability Implications#
The AI Act creates liability exposure primarily through its compliance requirements. Non-compliance can result in:
- Fines of up to €35 million or 7% of global annual turnover, whichever is higher
- Product recalls or bans
- Reputational damage from public enforcement actions
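Because the fine cap is the greater of the two figures, the percentage prong dominates for any firm with annual turnover above €500 million (€35M ÷ 7%). A quick sketch of the arithmetic, using illustrative turnover figures:

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine for the most serious violations:
    the greater of EUR 35M or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# The percentage prong overtakes the fixed cap above EUR 500M turnover.
for turnover in (100e6, 500e6, 10e9):
    print(f"turnover EUR {turnover:,.0f} -> max fine EUR {max_ai_act_fine(turnover):,.0f}")
```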
The EU is also developing a separate AI Liability Directive that would create civil liability rules, including:
- Presumption of causation for AI harms where the defendant violated AI Act requirements
- Disclosure obligations requiring defendants to provide evidence about AI systems
- Modified burden of proof in certain circumstances
Philosophy#
The EU approach reflects precautionary principles: regulate first, permit innovation within defined boundaries. It prioritizes predictability and rights protection over speed of deployment. The implicit theory is that clear rules enable responsible innovation rather than chilling it.
The United States: Sectoral and Ex Post Liability#
The US has no comprehensive federal AI legislation. Instead, AI is regulated (when at all) through:
Sectoral Regulators#
- FDA regulates AI medical devices
- NHTSA oversees autonomous vehicles
- FTC enforces against deceptive AI practices
- EEOC addresses AI in employment
- CFPB monitors AI in consumer finance
- SEC examines AI in securities markets
Each regulator applies existing authority to AI within its domain, often through guidance, enforcement actions, and occasional rulemaking.
Executive Orders#
The October 2023 Executive Order on AI established various requirements for federal use and procurement of AI, directed agencies to use existing authorities, and created reporting requirements for developers of frontier models. However, executive orders can be modified or rescinded by subsequent administrations and have limited force against private actors.
State Legislation#
States are filling federal gaps:
- Colorado enacted an AI discrimination law effective 2026
- California has considered multiple AI bills (SB 1047 vetoed, others pending)
- Illinois requires notice for AI in video interviews
- New York City requires bias audits for AI hiring tools
This creates a patchwork that varies by jurisdiction.
Common Law Liability#
The primary US approach to AI harm remains ex post liability through existing legal doctrines:
- Negligence when AI use falls below the standard of care
- Product liability when AI constitutes a defective product
- Statutory claims under employment discrimination, consumer protection, and other laws
Courts adapt existing doctrine to AI contexts, with results varying by jurisdiction and fact pattern.
Philosophy#
The US approach reflects skepticism of ex ante regulation and confidence in market mechanisms and tort liability. The implicit theory is that innovation should proceed unless and until specific harms justify specific interventions. Liability after the fact provides incentives without pre-emptive restrictions.
China: State-Directed Techno-Nationalism#
China’s AI regulation reflects its distinct political economy: nominally private companies operating within state direction, and technology policy serving national strategic goals.
Specific AI Regulations#
China has enacted targeted AI rules rather than comprehensive legislation:
Algorithm Recommendation Regulations (2022)
- Require transparency about recommendation algorithms
- Prohibit price discrimination through algorithms
- Mandate options to decline personalized recommendations
- Require impact assessments for algorithms affecting public opinion
Deep Synthesis (Deepfake) Regulations (2023)
- Require labeling of synthetic content
- Mandate identity verification for deepfake creators
- Prohibit deepfakes without subject consent
- Impose platform responsibility for synthetic content
Generative AI Measures (2023)
- Require registration for public-facing generative AI services
- Mandate training data compliance with content laws
- Require content to uphold “socialist core values”
- Impose security assessments before public deployment
Content Control Integration#
Chinese AI regulation is inseparable from content control. Generative AI must not produce content that:
- Subverts state power
- Undermines national unity
- Promotes terrorism or extremism
- Spreads false information
- Contains prohibited content under other laws
This integrates AI governance into the broader apparatus of information control.
Liability Framework#
Chinese liability for AI harm operates through:
- Administrative enforcement by the Cyberspace Administration of China (CAC) and other regulators
- Civil liability under the Civil Code, which includes provisions on technology-related harm
- Criminal liability for serious violations, particularly involving prohibited content or national security
Enforcement tends to be discretionary and politically influenced, with companies understanding that compliance includes responsiveness to informal government guidance.
Philosophy#
China’s approach reflects state primacy in technology development. AI must serve national goals: economic development, social stability, and international competitiveness. Regulation ensures alignment with those goals. Individual rights exist but are subordinate to collective and state interests.
Key Differences Compared#
Scope#
| Aspect | EU | US | China |
|---|---|---|---|
| Approach | Comprehensive, horizontal | Sectoral, vertical | Targeted, application-specific |
| Timing | Ex ante (pre-deployment) | Ex post (liability after harm) | Hybrid (approval + enforcement) |
| Enforcement | Regulatory agencies | Courts + regulators | State administrative |
| Rights focus | Individual data/dignity | Consumer protection | State/collective |
Risk Tolerance#
The three systems have fundamentally different risk tolerances:
EU: Low risk tolerance. The precautionary principle means uncertain risks justify precautionary restrictions. Better to regulate potential harms than permit them pending proof.
US: Higher risk tolerance. Innovation benefits are weighed against speculative harms. Regulation follows demonstrated problems, not anticipated ones.
China: Risk tolerance varies by domain. Commercial applications face moderate oversight; applications touching political stability face intensive control.
Innovation Implications#
EU: Compliance costs and pre-approval requirements may slow deployment. The flip side: clear rules reduce uncertainty and create a defined compliance path.
US: Faster deployment but higher liability uncertainty. Companies may face bet-the-company litigation without clear regulatory guidance.
China: Domestic deployment can be rapid within defined boundaries, but boundaries are politically determined and can shift unpredictably.
Cross-Border Complications#
Most significant AI systems operate across jurisdictions, creating conflicts:
Data Flows#
EU data protection rules restrict training data transfers. US companies operating in Europe face compliance obligations. Chinese data localization requirements keep certain data in China.
Standard Divergence#
A high-risk AI system under the EU Act might face no pre-deployment requirements in the US but content review in China. Designing for global compliance is increasingly difficult.
Enforcement Extraterritoriality#
The EU AI Act applies to any AI system whose outputs are used in the EU, regardless of where the provider is located. This extends EU regulation globally in practice.
Mutual Recognition#
There are no mutual recognition agreements for AI compliance. Certifying an AI system as compliant in one jurisdiction provides no credit in others.
Liability Forum Shopping#
These differences create incentives for liability forum shopping:
- Plaintiffs may prefer US courts for AI harms due to jury trials, class actions, and punitive damages
- Defendants may prefer EU administrative processes with capped penalties and no private litigation
- Both may avoid Chinese processes due to unpredictability and limited due process
Jurisdictional rules, recognition of judgments, and enforcement mechanisms will determine where AI liability ultimately gets resolved.
Convergence or Divergence?#
Will these systems converge over time?
Arguments for convergence:
- Global companies need consistent rules
- International trade pressure toward harmonization
- Technical standards may drive regulatory alignment
- Academic and policy exchange spreads ideas
Arguments for divergence:
- Regulatory competition as jurisdictions seek advantage
- Path dependence from existing legal traditions
- Different political values about rights, state power, innovation
- Strategic competition in AI as geopolitical issue
The most likely outcome is partial convergence on technical standards with persistent divergence on values-laden questions like content control, surveillance, and human rights.
Strategic Implications#
For practitioners, the three-system framework suggests:
For AI Developers#
- Design for EU compliance as the strictest common baseline
- Maintain jurisdiction-specific deployment controls (see the sketch after this list)
- Expect US liability exposure regardless of base location
- Treat Chinese market as distinct and politically sensitive
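In practice, jurisdiction-specific deployment controls often amount to a gating layer that decides, per market, whether a feature ships and under what conditions. The sketch below is a minimal, hypothetical example; the feature names, jurisdiction codes, and gating decisions are invented for illustration and simplify the real rules considerably.

```python
# Hypothetical per-jurisdiction gating for AI features. All entries
# are illustrative, not a statement of what any law actually permits.
DEPLOYMENT_MATRIX: dict[str, dict[str, bool]] = {
    "emotion_recognition": {"EU": False, "US": True, "CN": False},
    "resume_screening":    {"EU": True,  "US": True, "CN": True},
    "synthetic_media":     {"EU": True,  "US": True, "CN": True},
}

# Cross-cutting duties, e.g. labeling synthetic content where required.
LABELING_REQUIRED = {("synthetic_media", "EU"), ("synthetic_media", "CN")}

def can_serve(feature: str, jurisdiction: str) -> tuple[bool, bool]:
    """Return (allowed, must_label); unknown combinations default to blocked."""
    allowed = DEPLOYMENT_MATRIX.get(feature, {}).get(jurisdiction, False)
    must_label = (feature, jurisdiction) in LABELING_REQUIRED
    return allowed, must_label

print(can_serve("emotion_recognition", "EU"))  # (False, False)
print(can_serve("synthetic_media", "CN"))      # (True, True)
```

Defaulting unknown feature-jurisdiction pairs to blocked is the same precautionary posture discussed above: nothing ships to a market until someone has affirmatively cleared it.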
For AI Users#
- Understand which jurisdictions’ rules apply to your use
- Document compliance with multiple frameworks
- Consider jurisdiction in vendor selection
- Plan for regulatory evolution
For Policymakers#
- Monitor other jurisdictions’ approaches
- Consider extraterritorial effects of domestic rules
- Engage in international standard-setting
- Anticipate regulatory arbitrage
Conclusion#
The US, EU, and China are conducting three simultaneous experiments in AI governance. Each reflects different assumptions about innovation, risk, rights, and the role of the state. Each will produce different outcomes in terms of AI development, deployment, and harm.
There will be no single global answer to the question of an AI standard of care. Professionals operating in this space must understand all three systems and navigate their tensions. The fragmented regulatory landscape is not a temporary condition to be resolved but a permanent feature of global AI governance.
Those who understand the differences, and can operate compliantly across all three, will have significant advantages as AI reshapes every industry and jurisdiction.