The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law, and its reach is extraterritorial: if your AI system is placed on the EU market or its output is used in the European Union, the Act applies regardless of where your headquarters is located. For US companies serving European markets, this creates significant compliance obligations and liability exposure that cannot be ignored.
A Global Regulatory Divergence

The United States, European Union, and China are the world’s three dominant AI powers. Together they produce most frontier AI research, deploy most commercial AI systems, and shape most global AI policy. Yet their approaches to AI regulation, and particularly to AI liability, are strikingly different.
As AI systems become integral to commerce, healthcare, and daily life, jurisdictions worldwide are racing to establish regulatory frameworks. The approaches vary dramatically, from the EU’s comprehensive risk-based legislation to the UK’s sector-specific principles, from China’s content-focused rules to Canada’s failed attempt at a comprehensive AI law. Understanding these frameworks is essential for any organization deploying AI across borders.
AI in Drug Discovery: The New Liability Frontier

Artificial intelligence is transforming pharmaceutical development at unprecedented scale. The AI drug discovery market has grown to approximately $2.5-7 billion in 2025, with projections reaching $16-134 billion by 2034 depending on the analysis. AI-discovered molecules reportedly achieve an 80-90% success rate in Phase I trials, substantially higher than the rate for traditionally discovered candidates.
AI and Cybersecurity: A Two-Sided Liability Coin

Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors: data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence?