Artificial intelligence is reshaping journalism and media at every level, from AI systems that write earnings reports and sports recaps to deepfake technology that can fabricate video of events that never occurred. This transformation raises profound questions: When an AI “hallucinates” false facts in a news article, who bears liability for defamation? When AI-generated content spreads misinformation that causes real-world harm, what standard of care applies?
## The End of Platform Immunity for AI

For nearly three decades, Section 230 of the Communications Decency Act shielded online platforms from liability for user-generated content. That shield is crumbling. Courts now distinguish between passively hosting third-party content, which remains protected, and actively generating, amplifying, or curating content through AI systems, which increasingly is not.
## The Central Question

Does Section 230 of the Communications Decency Act, “the 26 words that created the internet,” protect AI companies from liability for content their systems generate?