The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law, and it applies to companies worldwide. If your AI system is used in the European Union, you’re subject to EU jurisdiction regardless of where your headquarters is located. For US companies serving European markets, this creates significant compliance obligations and liability exposure that cannot be ignored.
This guide provides a liability-focused analysis of the EU AI Act, covering enforcement mechanisms, penalty structures, the implications of the AI Liability Directive’s withdrawal, and sector-specific compliance considerations for healthcare, legal services, financial services, and robotics.
## Extraterritorial Scope: Why US Companies Must Comply
Like GDPR before it, the EU AI Act has extraterritorial reach. You must comply if:
- You place AI systems on the EU market
- Your AI system’s outputs are used within the EU
- You deploy AI affecting EU users, even from US servers
- EU-based clients use your AI tools
Geographic location provides no exemption. Non-EU providers must appoint an EU-based authorized representative.
### Who Falls Under the AI Act?
Providers (Article 3(3)): Companies that develop or have AI systems developed and place them on the EU market under their name or trademark.
Deployers (Article 3(4)): Organizations using AI systems under their authority, distinct from personal, non-professional use.
Importers and Distributors (Articles 3(6), 3(7)): Companies bringing third-party AI systems into the EU market.
Authorized Representatives (Article 22): Non-EU providers must designate an EU-based representative before placing high-risk AI on the market. Representatives bear compliance responsibility and face direct enforcement action.
### Practical Implications for US Companies
| Business Scenario | AI Act Applies? | Key Obligations |
|---|---|---|
| SaaS product with EU customers | Yes | Conformity assessment, documentation, EU representative |
| AI tool accessed by EU employees of US company | Likely yes | Risk assessment, transparency obligations |
| AI service for EU-based clients | Yes | Full compliance based on risk tier |
| Internal AI for US operations only | No | No EU obligations (unless outputs reach EU) |
## Risk-Based Classification System
The AI Act categorizes AI systems into four risk tiers, with obligations escalating by risk level.
### Tier 1: Prohibited AI Practices
These AI applications are banned outright, effective February 2, 2025:
Cognitive Manipulation:
- AI exploiting vulnerabilities of specific groups (age, disability)
- Subliminal techniques causing physical or psychological harm
Social Scoring:
- Evaluation of individuals by public or private actors based on social behavior or personality characteristics
- Systematic classification leading to detrimental treatment
Predictive Policing (individual-level):
- AI predicting criminal behavior based solely on profiling or personality traits
Biometric Systems:
- Real-time remote biometric identification in public spaces (limited law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- Biometric categorization inferring race, political opinions, religion, or sexual orientation
Facial Recognition:
- Untargeted scraping of facial images from internet or CCTV for database creation
Penalties for Prohibited Practices: Up to €35 million or 7% of global annual turnover, whichever is higher.
### Tier 2: High-Risk AI Systems
High-risk AI faces the most extensive obligations, fully applicable August 2, 2026 (extended to August 2, 2027 for AI in regulated products).
Annex I Systems (AI in regulated products):
- Medical devices (EU Regulation 2017/745)
- In vitro diagnostic devices
- Machinery and equipment
- Radio equipment
- Civil aviation
- Motor vehicles and components
- Marine equipment
Annex III Systems (standalone high-risk):
| Category | Examples |
|---|---|
| Biometric identification | Remote identification systems (non-real-time) |
| Critical infrastructure | AI in energy, water, transport networks |
| Education | AI determining access to education, student assessment |
| Employment | Recruitment tools, performance evaluation, termination decisions |
| Essential services | Credit scoring, insurance pricing, emergency dispatch |
| Law enforcement | Evidence evaluation, polygraph alternatives, profiling |
| Migration/asylum | Application assessment, border control |
| Justice | Sentencing assistance, legal research |
High-Risk Obligations:
- Risk management system: continuous identification and mitigation of risks
- Data governance: training data quality, bias testing, documentation
- Technical documentation: comprehensive records before market placement
- Record-keeping: automatic logging of AI operations (a minimal sketch follows this list)
- Transparency: clear instructions for deployers
- Human oversight: mechanisms for human intervention
- Accuracy, robustness, cybersecurity: performance standards
- Conformity assessment: third-party or self-assessment depending on category
- EU database registration: public registration before deployment
- Post-market monitoring: ongoing performance surveillance
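The record-keeping obligation is the most directly implementable item on this list. Below is a minimal sketch of an append-only decision log, assuming a hypothetical `log_ai_decision` helper and an invented JSONL schema; Article 12 requires automatic logging capability but prescribes no particular format.

```python
# Hypothetical sketch of the automatic logging obligation: one append-only
# record per AI-assisted operation. Field names and the JSONL layout are
# illustrative assumptions, not an AI Act-mandated schema.
import json
import time
import uuid

def log_ai_decision(logfile: str, model_id: str, inputs: dict,
                    output: str, human_reviewer: str | None) -> str:
    """Append one audit record per AI operation; returns the record id."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,             # which system/version produced the output
        "inputs": inputs,                 # what the system was asked
        "output": output,                 # what it returned
        "human_reviewer": human_reviewer, # who exercised oversight, if anyone
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

log_ai_decision("ai_audit.jsonl", "cv-screener-v2",
                {"applicant_id": "A-1042"}, "shortlist", "j.doe")
```

A log like this also serves the human-oversight and post-market-monitoring items: the reviewer field documents intervention, and the file supports later performance audits.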
### Tier 3: Limited Risk AI
Limited-risk systems are subject to transparency obligations only:
- Chatbots: Users must be informed they’re interacting with AI
- Emotion recognition: Subjects must be notified
- Biometric categorization: Notification required
- Deep fakes/synthetic media: Must be labeled as AI-generated
Effective August 2, 2026.
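For the chatbot case, compliance can be as simple as a disclosure surfaced at the start of the conversation. A minimal sketch with illustrative wording and a hypothetical `wrap_reply` helper; the Act requires that users be informed, not any particular phrasing:

```python
# Illustrative transparency notice for a chatbot. The disclosure text and
# the wrap_reply helper are assumptions for the sketch, not mandated language.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def wrap_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(wrap_reply("Hello! How can I help with your order?", first_turn=True))
```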
### Tier 4: Minimal Risk AI
Largely unregulated:
- Spam filters
- AI-enabled video games
- Basic recommendation systems
- Consumer applications without high-risk characteristics
## General-Purpose AI (GPAI) Requirements
Foundation models and large language models face dedicated requirements effective August 2, 2025.
### All GPAI Models
Documentation Requirements:
- Technical documentation describing capabilities and limitations
- Training data summary (sufficiently detailed for understanding)
- Copyright compliance documentation
- Energy consumption metrics
Transparency:
- Clear labeling of AI-generated content
- Disclosure of training methodologies
- Cooperation with downstream deployers on compliance
### Systemic Risk GPAI
Models designated as posing “systemic risk” (training compute above 10^25 FLOPs, or designation by the European Commission) face additional requirements:
- Model evaluation with standardized protocols
- Adversarial testing (red-teaming)
- Serious incident reporting to EU AI Office
- Cybersecurity protections for model weights
- Energy efficiency documentation
Current threshold: Approximately GPT-4 class models and above.
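Because the threshold is a compute figure, a provider can estimate exposure before training even finishes. Here is a rough back-of-the-envelope check using the common ~6 × parameters × tokens approximation for dense transformer training compute; that heuristic is a community rule of thumb, not a method the Act prescribes, and the model sizes below are hypothetical:

```python
# Rough check against the 10^25 FLOPs systemic-risk threshold.
# The 6 * N * D estimate is a widely used approximation for dense
# transformer training compute; model figures are invented examples.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer: ~6 * N * D."""
    return 6.0 * n_params * n_tokens

for name, params, tokens in [
    ("mid-size model", 7e9, 2e12),            # 7B params, 2T tokens
    ("frontier-scale model", 1.8e12, 13e12),  # hypothetical frontier run
]:
    flops = estimated_training_flops(params, tokens)
    flagged = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.2e} FLOPs -> systemic risk presumed: {flagged}")
```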
## Liability Framework and Enforcement
### The AI Liability Directive Withdrawal
On February 11, 2025, the European Commission withdrew its proposed AI Liability Directive from the 2025 Work Programme, citing “no foreseeable agreement” among Member States.
What the Directive Would Have Provided:
- Presumption of causation when AI non-compliance caused harm
- Court-ordered disclosure of AI training and operational data
- Lowered burden of proof for victims of AI harm
Without It: Victims must prove AI caused harm under existing national tort laws, often requiring expensive expert testimony and facing information asymmetry against AI developers.
The withdrawal was formally confirmed in October 2025. Executive Vice-President Henna Virkkunen defended the decision, arguing the directive would have created fragmented rules across Member States and that new liability frameworks should wait until the AI Act is fully implemented.
Critics’ Response: MEP Axel Voss warned of a “Wild West” approach to AI liability. The Center for Democracy and Technology expressed concern that victims of AI harm now lack adequate legal recourse.
### The Product Liability Directive (December 2026)
While the AI Liability Directive failed, the revised Product Liability Directive (Directive (EU) 2024/2853) survived, and it explicitly covers software and AI.
Key Changes Effective December 9, 2026:
| Old Directive | New Directive |
|---|---|
| “Products” = tangible goods | Software and AI systems are products |
| Manufacturing defects focus | Design, manufacturing, and algorithmic defects |
| Producer liability only | Importer and authorized representative liability |
| No disclosure mechanisms | Court-ordered disclosure of technical data |
| €500 damage threshold | Threshold eliminated |
What This Means:
- AI developers face strict liability for defective AI systems causing harm
- Victims don’t need to prove fault, only defect and causation
- Software updates that introduce defects create new liability
- Failure to update known vulnerabilities can constitute defect
### Penalty Structure
The AI Act establishes three penalty tiers:
| Violation Category | Maximum Penalty |
|---|---|
| Prohibited AI practices | €35M or 7% global turnover |
| High-risk system violations | €15M or 3% global turnover |
| Incorrect information to authorities | €7.5M or 1.5% global turnover |
For SMEs and startups, each cap is the lower of the absolute amount and the percentage of turnover, rather than the higher.
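In code, the two regimes differ only in whether the higher or the lower of the two caps applies. A sketch with hypothetical turnover figures:

```python
# Illustrative calculation of the AI Act's penalty caps: the higher of the
# absolute amount and the turnover percentage for most companies, the lower
# of the two for SMEs. Turnover figures below are hypothetical.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    absolute, pct = TIERS[violation]
    pct_amount = pct * global_turnover_eur
    return min(absolute, pct_amount) if is_sme else max(absolute, pct_amount)

# A company with EUR 2B global turnover: 7% = EUR 140M, which exceeds EUR 35M.
print(max_fine("prohibited_practice", 2_000_000_000))               # 140000000.0
print(max_fine("prohibited_practice", 2_000_000_000, is_sme=True))  # 35000000.0
```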
### Enforcement Architecture
National Level:
- Each Member State designates Market Surveillance Authorities
- Authorities can order withdrawal from market, impose fines, require modifications
EU Level:
- European AI Office (established 2024) coordinates enforcement
- Directly supervises GPAI providers
- Develops guidelines, codes of practice, and technical standards
Private Enforcement:
- Product Liability Directive enables civil claims
- National courts hear cases under Member State law
- Cross-border claims possible under the Brussels I (recast) Regulation
## Sector-Specific Compliance
### Healthcare AI
AI in healthcare faces some of the strictest requirements under both the AI Act and EU medical device regulations.
Classification:
- Most diagnostic/therapeutic AI: High-risk (Annex I via Medical Device Regulation)
- Administrative AI (scheduling, billing): Lower risk unless affecting care decisions
Key Obligations:
- Clinical evaluation and post-market clinical follow-up
- CE marking under MDR plus AI Act conformity
- Integration with existing medical device quality management systems
- Enhanced cybersecurity for connected devices
Liability Exposure:
- Product liability for defective medical AI
- Professional negligence if clinicians over-rely on AI
- Hospital liability for inadequate AI governance
Professional Standard Implications: Healthcare providers deploying AI must establish:
- Clinical validation protocols before use
- Human oversight requirements
- Documentation of AI-assisted decisions
- Training for clinical staff on AI limitations
### Legal Services AI
AI in legal practice triggers both high-risk classification and professional responsibility concerns.
Classification:
- AI for legal research/document drafting: Limited risk (transparency required)
- AI assisting judicial decisions: High-risk (Annex III)
- AI in access to justice contexts: High-risk
Professional Implications: EU Member States impose professional obligations on lawyers using AI:
- Duty to verify AI-generated legal content
- Prohibition on delegating professional judgment to AI
- Client disclosure requirements for AI use
- Competence requirements for AI tool selection
Liability Exposure:
- Malpractice if AI hallucinations go unchecked
- Breach of confidentiality if AI processes client data improperly
- Regulatory discipline for inadequate AI oversight
See: Legal AI Hallucination Cases for documented disciplinary actions.
### Financial Services AI
Financial AI faces layered regulation under the AI Act and sectoral financial regulations.
Classification:
- Credit scoring: High-risk (Annex III)
- Insurance risk assessment: High-risk
- Fraud detection: Varies by implementation
- Robo-advisors: High-risk if affecting significant financial decisions
Regulatory Overlay:
- European Banking Authority (EBA) guidelines on AI in credit
- EIOPA guidance on AI in insurance
- MiFID II suitability requirements for AI investment advice
- DORA (Digital Operational Resilience Act) cybersecurity requirements
Liability Exposure:
- Discrimination claims for biased lending/insurance AI
- Consumer protection violations for opaque AI decisions
- Regulatory fines for DORA non-compliance
- Professional liability for unsuitable AI-driven advice
### Robotics and Autonomous Systems
Physical AI systems, including industrial robots, autonomous vehicles, and service robots, face overlapping product safety and AI Act requirements.
Classification:
- Industrial robots: High-risk (Annex I via Machinery Regulation)
- Autonomous vehicles: High-risk (Annex I via vehicle type approval)
- Service robots: Varies by function and risk profile
- Drones: High-risk if deployed in critical infrastructure contexts
Key Regulatory Overlap:
- Machinery Regulation (EU) 2023/1230 (replacing Machinery Directive)
- General Product Safety Regulation 2023/988
- Motor Vehicle Type Approval regulations
- AI Act conformity for AI components
Liability Exposure:
- Product liability for physical harm from robotic systems
- Strict liability under Product Liability Directive
- Potential criminal liability for serious safety violations
- Workers’ compensation implications for workplace robot injuries
See: Agentic AI Liability for autonomous system liability analysis.
### Employment and HR AI
AI in employment decisions faces some of the AI Act’s most prescriptive requirements.
Classification: All of the following are high-risk:
- AI for job advertisement targeting
- Recruitment and applicant screening
- Candidate assessment and selection
- Performance monitoring and evaluation
- Promotion and termination decisions
Key Obligations:
- Bias testing and documentation (see the impact-ratio example after this list)
- Human review of AI-influenced decisions
- Transparency to job applicants about AI use
- Record retention for audit purposes
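The Act mandates bias testing without prescribing a metric. One widely used screen is the four-fifths (80%) rule for adverse impact in selection rates, sketched below with fabricated numbers purely for illustration; it is not an AI Act-specified test:

```python
# Four-fifths rule screen for adverse impact in selection rates.
# The metric choice and all data here are illustrative assumptions.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> tuple[float, bool]:
    """Ratio of lowest to highest group selection rate; flag if below 0.8."""
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= 0.8

rates = {
    "group_a": selection_rate(48, 120),  # 40% selected
    "group_b": selection_rate(27, 100),  # 27% selected
}
ratio, passes = four_fifths_check(rates)
print(f"impact ratio = {ratio:.2f}, passes four-fifths rule: {passes}")
```

A failing ratio does not itself establish a violation, but documenting the test, the result, and the remediation is exactly the kind of record the Act's bias-testing and record-retention obligations contemplate.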
Liability Exposure:
- Employment discrimination claims
- GDPR violations for automated decision-making (Article 22)
- Works council/union challenges in jurisdictions with co-determination
- Individual complaints to data protection authorities
## Implementation Timeline
| Date | Milestone | Key Actions Required |
|---|---|---|
| August 1, 2024 | AI Act enters into force | Begin compliance planning |
| February 2, 2025 | Prohibited practices banned; AI literacy required | Ensure no prohibited AI; train staff |
| August 2, 2025 | GPAI rules apply; governance operational | Foundation model documentation |
| August 2, 2026 | High-risk requirements fully applicable | Conformity assessments complete |
| December 9, 2026 | Product Liability Directive effective | Product liability readiness |
| August 2, 2027 | Extended deadline for regulated products | Annex I system compliance |
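Because obligations phase in over three years, a simple script can flag which milestones from the table above are already live. The dates come from the AI Act and the revised Product Liability Directive; the script itself is illustrative:

```python
# Deadline tracker over the compliance milestones in the table above.
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy required"),
    (date(2025, 8, 2), "GPAI rules apply; governance operational"),
    (date(2026, 8, 2), "High-risk requirements fully applicable"),
    (date(2026, 12, 9), "Product Liability Directive effective"),
    (date(2027, 8, 2), "Extended deadline for regulated products (Annex I)"),
]

today = date.today()
for deadline, milestone in MILESTONES:
    status = "IN EFFECT" if deadline <= today else f"{(deadline - today).days} days away"
    print(f"{deadline.isoformat()}  {milestone}: {status}")
```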
## Compliance Recommendations for US Companies
### Immediate Actions (2025)
- Inventory AI systems: map all AI deployments with EU market exposure (see the register sketch after this list)
- Classify by risk tier: determine which systems are high-risk, limited risk, or minimal risk
- Assess prohibited practices: ensure no prohibited AI applications
- GPAI evaluation: if deploying foundation models, prepare documentation requirements
- Appoint EU representative: required for non-EU providers of high-risk systems
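A minimal sketch of the inventory-and-classify step, assuming a simplified register and an invented `needs_eu_representative` rule that mirrors the Article 22 requirement described above:

```python
# Sketch of an AI-system register with risk-tier classification.
# The tiers mirror the Act's four categories; the systems listed and the
# needs_eu_representative rule are simplified assumptions for illustration.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    eu_exposure: bool  # placed on the EU market, or outputs used in the EU
    tier: RiskTier

def needs_eu_representative(systems: list[AISystem], provider_in_eu: bool) -> bool:
    """Non-EU providers of high-risk systems must appoint an EU representative."""
    return (not provider_in_eu) and any(
        s.eu_exposure and s.tier is RiskTier.HIGH for s in systems
    )

inventory = [
    AISystem("resume screener", eu_exposure=True, tier=RiskTier.HIGH),
    AISystem("support chatbot", eu_exposure=True, tier=RiskTier.LIMITED),
    AISystem("internal spam filter", eu_exposure=False, tier=RiskTier.MINIMAL),
]
print(needs_eu_representative(inventory, provider_in_eu=False))  # True
```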
### Medium-Term Actions (2025-2026)
- Technical documentation: develop comprehensive documentation for high-risk systems
- Conformity assessment planning: determine self-assessment vs. third-party assessment needs
- Bias testing protocols: implement and document bias testing for applicable systems
- Human oversight mechanisms: design intervention capabilities into AI workflows
- Incident response procedures: prepare for AI Office incident reporting
### Governance Structure
Organizations should establish:
- AI Governance Committee with board-level oversight
- AI Risk Officer or designated compliance function
- Technical Documentation Repository for regulatory access
- Incident Response Team for AI-related events
- Training Program for AI literacy across the organization
## Frequently Asked Questions
Does the EU AI Act apply to my US company? Yes, if you place AI systems on the EU market, if their outputs are used in the EU, or if EU-based clients use your tools; headquarters location is irrelevant.
What happened to the EU AI Liability Directive? The European Commission withdrew it in February 2025 (formally confirmed in October 2025), leaving victims to pursue claims under national tort law and the revised Product Liability Directive.
When do high-risk AI requirements take effect? August 2, 2026, extended to August 2, 2027 for AI embedded in regulated products (Annex I).
What are the maximum penalties under the AI Act? Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices; lower tiers of €15M/3% and €7.5M/1.5% apply to other violations.
How does the Product Liability Directive affect AI liability? From December 9, 2026, software and AI systems count as products subject to strict liability: victims need only prove defect and causation, not fault.
Can my company face liability for AI hallucinations under EU law? Yes; unchecked hallucinations can trigger professional malpractice claims, product liability exposure, and regulatory discipline (see Legal AI Hallucination Cases).
## Related Resources
AI Standard of Care Resources:
- International AI Regulation Comparison: EU, UK, China, Canada frameworks
- AI Product Liability: Software as product, AI LEAD Act analysis
- Agentic AI Liability: Autonomous AI agent liability frameworks
- AI Insurance Coverage: Emerging AI insurance products
Sector-Specific Guidance:
- Healthcare AI: Medical AI standards and liability
- Legal AI Hallucinations: Attorney discipline for AI misuse
- Financial Services AI: Algorithmic trading and robo-advisor standards
Need EU AI Act Compliance Guidance?
The EU AI Act creates binding obligations for companies worldwide. From risk classification to conformity assessment to liability exposure, understanding your compliance requirements is essential. Our resources help you navigate AI regulation across jurisdictions.
Explore AI Compliance Resources