Journalism & Media AI Standard of Care

Artificial intelligence is reshaping journalism and media at every level, from AI systems that write earnings reports and sports recaps to deepfake technology that can fabricate video of events that never occurred. This transformation brings profound questions: When an AI “hallucinates” false facts in a news article, who bears liability for defamation? When AI-generated content spreads misinformation that causes real-world harm, what standard of care applies?

The legal framework is still emerging, but one principle is clear: AI does not eliminate editorial responsibility. Publishers who deploy AI for news generation remain liable for the content they publish, and the traditional standards of journalism (accuracy, verification, fairness) apply regardless of whether humans or machines produce the initial draft.

Key figures:

  • 75% of newsrooms used AI for content assistance (2024 survey)
  • $50M+ defamation verdict: AI-implicated case settlement range
  • 1,000+ AI-generated articles per day across major wire services (estimated)
  • 85% of readers cannot reliably identify AI-generated content

AI Applications in Journalism & Media

Automated News Generation

AI systems now produce substantial volumes of news content:

| Application | Examples | Scale |
| --- | --- | --- |
| Financial reporting | Earnings summaries, market updates | Thousands daily |
| Sports coverage | Game recaps, statistics summaries | Extensive automation |
| Weather reporting | Forecasts, storm coverage | Largely automated |
| Data journalism | Election results, census data | Real-time generation |
| Local news | Crime reports, real estate transactions | Filling coverage gaps |

Major implementations:

  • Associated Press: Has used AI for earnings reports since 2014
  • Washington Post: “Heliograf” system for elections and sports
  • Bloomberg: AI-generated financial news
  • Reuters: Automated news and Lynx Insight AI

AI-Assisted Reporting

Beyond full automation, AI assists human journalists:

  • Research and background: AI summarizing documents and sources
  • Transcription: Automated interview transcription
  • Translation: Real-time translation for international coverage
  • Fact-checking assistance: AI flagging potential inaccuracies
  • Source identification: AI finding relevant sources and experts
  • Data analysis: Pattern recognition in large datasets

The Hybrid Model
Most news organizations use AI as an assistant to human journalists rather than a replacement. The AI drafts, summarizes, or identifies information, but human editors review and approve. This hybrid model attempts to capture efficiency gains while maintaining editorial oversight, but the adequacy of that oversight defines the standard of care.
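To make that oversight concrete, here is a minimal sketch of a hybrid-model publication gate in Python. The `Draft` fields, the `may_publish` function, and the gate logic are illustrative assumptions, not any newsroom's actual system:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    body: str
    ai_generated: bool
    reviewed_by: str | None = None  # named editor who approved, if any
    facts_verified: bool = False    # has fact-checking been completed?

def may_publish(draft: Draft) -> bool:
    """AI-assisted copy clears the gate only after a named human editor
    has approved it and its factual claims have been checked."""
    if draft.ai_generated:
        return draft.reviewed_by is not None and draft.facts_verified
    return True  # human-written copy follows the normal editorial flow

# An unreviewed AI draft is blocked; review and verification unblock it.
draft = Draft(body="Q3 earnings rose 4%...", ai_generated=True)
assert not may_publish(draft)
draft.reviewed_by, draft.facts_verified = "j.smith", True
assert may_publish(draft)
```

The point of a hard gate like this is that the adequacy of human review, not the AI's output quality, is what defines the standard of care.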

Content Moderation

AI moderates user-generated content at scale:

  • Comment filtering: Automated moderation of reader comments
  • Misinformation detection: AI flagging false claims
  • Hate speech removal: Automated content policy enforcement
  • Recommendation algorithms: AI deciding what content users see
  • Trending topic curation: AI selecting newsworthy topics

Synthetic Media Production

AI generates increasingly sophisticated media:

  • AI-generated images: Illustrations, stock photography
  • Synthetic video: AI-created video content, virtual anchors
  • Voice synthesis: AI-generated voiceovers and podcasts
  • Interactive graphics: AI-generated data visualizations
  • Personalized content: Different versions for different audiences

Defamation Liability for AI-Generated Content

Traditional Defamation Standards

Defamation law applies fully to AI-generated content:

Elements of defamation:

  1. False statement of fact
  2. Published to third parties
  3. Concerning the plaintiff
  4. Causing damage to reputation
  5. Fault (negligence or actual malice)

For public figures: Must prove “actual malice” (knowledge of falsity or reckless disregard for the truth)

For private figures: Generally must prove negligence

AI “Hallucination” as Defamation

AI language models frequently generate false statements presented as fact. These “hallucinations” can constitute defamation:

Notable examples:

  • ChatGPT falsely stating a law professor sexually harassed students on a trip that never occurred
  • AI summaries falsely attributing criminal conduct to named individuals
  • AI-generated biographical information containing false accusations
  • Automated news systems publishing incorrect information about real people

The Hallucination Problem
Large language models do not distinguish between truth and falsehood; they generate statistically plausible text. When that text includes false factual statements about real people, the result can be actionable defamation. The AI’s inability to verify facts does not excuse the publisher who uses it.

Publisher Liability for AI Content

When AI generates defamatory content, liability typically flows to:

The publisher:

  • News organizations publishing AI-generated articles
  • Platforms featuring AI-generated content
  • Companies using AI for public communications
  • Anyone who “adopts” AI output by publishing it

The “republication” principle: Each publication of defamatory content is a separate act of defamation. A publisher cannot escape liability by claiming the AI generated the false statement.

Emerging AI Defamation Cases

Several cases have addressed AI-generated defamation:

Walters v. OpenAI (2023):

  • Radio host Mark Walters sued after ChatGPT falsely stated he embezzled from a gun rights organization
  • Case raised questions about AI tool provider liability
  • Settled/dismissed, but established AI defamation as a viable claim

Multiple pending cases:

  • Lawsuits against AI companies for false biographical information
  • Claims against publishers using AI without adequate verification
  • Cases involving AI summaries of court proceedings

Defenses and Limitations

Potential defenses in AI defamation cases:

| Defense | Application to AI |
| --- | --- |
| Truth | If the AI-generated statement is true, there is no liability |
| Opinion | Opinion clearly labeled as AI-generated may be protected |
| Fair report privilege | AI accurately reporting official proceedings |
| Section 230 | May protect platforms hosting user AI-generated content |
| Retraction statutes | Prompt correction may limit damages |

Section 230 limitations:

  • Protects platforms from user content liability
  • Does NOT protect a publisher’s own AI-generated content
  • Does NOT protect if publisher exercises editorial control
  • Increasingly narrowly interpreted by courts

Misinformation and Disinformation Liability

The AI Misinformation Amplification Problem

AI systems can generate and spread misinformation at unprecedented scale:

Generation:

  • AI creates plausible-seeming false content
  • Deepfakes fabricate events that never occurred
  • AI “news sites” publish entirely fabricated stories
  • Synthetic voices impersonate real people

Amplification:

  • Recommendation algorithms promote engaging (often false) content
  • AI-powered social media bots spread misinformation
  • Automated curation may favor sensational falsehoods
  • Personalization creates filter bubbles reinforcing false beliefs

Legal Theories for Misinformation Harm

While pure misinformation is often protected speech, liability can attach when:

Tortious conduct:

  • Defamation: False statements harming specific individuals
  • Fraud: Knowing falsehoods causing financial harm
  • Intentional infliction of emotional distress: Extreme conduct causing severe distress
  • Negligence: When a duty of care exists (e.g., fiduciary relationships)

Statutory violations:

  • Election laws: False statements about voting procedures or candidates
  • Consumer protection: Deceptive practices affecting consumers
  • Securities laws: Market manipulation through false information
  • Public health: Some states have health misinformation laws

Election Interference Concerns

AI-generated election misinformation faces particular scrutiny:

Potential violations:

  • Voter intimidation or suppression
  • False information about voting procedures
  • Deepfakes of candidates
  • AI-generated political advertisements

State laws:

  • California, Texas, and other states criminalize election-related deepfakes
  • Disclosure requirements for AI-generated political content
  • Restrictions on misleading election communications

FCC AI Voice Ruling Impact
The FCC’s February 2024 ruling that AI-generated voices are “artificial” under the TCPA strengthened enforcement against AI robocalls spreading election misinformation. The infamous “Biden robocall” in New Hampshire, which used an AI-cloned voice to discourage voting, resulted in enforcement action and demonstrated the regulatory response to AI election interference.

Platform Content Moderation Liability

Platforms face pressure from multiple directions:

Over-moderation claims:

  • First Amendment concerns (for government-compelled moderation)
  • Breach of contract (if platforms promise openness)
  • Anti-conservative bias allegations (political)
  • Competitive harm from removing legitimate content

Under-moderation claims:

  • Negligence for foreseeable harms from content
  • Aiding and abetting illegal conduct
  • Public nuisance
  • State consumer protection laws

Editorial Standards for AI Journalism

Emerging Industry Standards

News organizations are developing AI-specific editorial policies:

AP Style Guidelines:

  • AI-generated content must be clearly labeled
  • Human editors must review AI outputs before publication
  • AI cannot be listed as an author or source
  • Verification requirements unchanged

New York Times Standards:

  • Disclosure of AI assistance in reporting
  • Prohibition on AI-generated images in news coverage
  • Human editorial judgment required for publication decisions
  • Training data transparency considerations

Reuters Standards:

  • AI-assisted reporting requires editor approval
  • Synthetic media prohibited in news photography
  • Clear labeling of AI-generated content
  • Human accountability for all published content

The “Reasonable Publisher” Standard

Courts assessing journalism negligence look to industry standards:

What would a reasonable publisher do when using AI?

  • Verify AI-generated facts before publication
  • Maintain human editorial oversight
  • Disclose AI involvement to audiences
  • Implement quality control processes
  • Correct errors promptly when discovered
  • Train staff on AI limitations

Failure to meet industry standards can establish negligence in defamation cases against media defendants.

Disclosure and Transparency

Leading organizations are adopting disclosure practices:

Content labeling:

  • “This article was generated with AI assistance”
  • “AI was used to analyze data for this report”
  • “This summary was created by an AI system”

Process transparency:

  • Explanations of how AI is used in newsroom
  • Documentation of human oversight processes
  • Public AI ethics policies
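One way to operationalize labeling and process transparency is to attach machine-readable disclosure metadata to each article. The sketch below is a hypothetical schema; the `AIDisclosure` fields are assumptions for illustration, not an industry standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIDisclosure:
    ai_used: bool         # was AI involved at all?
    role: str             # e.g. "drafting", "data analysis", "summarization"
    human_reviewed: bool  # did an editor approve the output?
    reader_label: str     # the disclosure text shown to readers

disclosure = AIDisclosure(
    ai_used=True,
    role="data analysis",
    human_reviewed=True,
    reader_label="AI was used to analyze data for this report",
)
print(json.dumps(asdict(disclosure), indent=2))
```

A structured record like this can drive both the on-page label and an auditable log of how AI was used, which may matter if practices are later questioned.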

Copyright and AI Content Issues

AI Training on News Content

News organizations face copyright questions from AI training:

Pending litigation:

  • New York Times v. OpenAI and Microsoft (filed December 2023)
  • Other publishers considering or filing suits
  • Questions of fair use, licensing, and damages

Key issues:

  • Did AI companies infringe by training on copyrighted news?
  • Should publishers be compensated for training data?
  • Can AI generate content substantially similar to sources?

AI-Generated Content Copyright

Who owns copyright in AI-generated journalism?

Current law:

  • Copyright requires human authorship
  • Purely AI-generated content may not be copyrightable
  • Human selection, arrangement, editing may create copyright
  • Uncertainty creates risk for news organizations

Practical implications:

  • AI-generated articles may not have copyright protection
  • Others may freely copy AI-generated news content
  • Human contribution should be documented
  • Copyright registration challenges for AI content

Synthetic Media and Source Rights

AI-generated images and video raise additional issues:

  • Training data rights: Were source images properly licensed?
  • Output similarity: Does generated content infringe its sources?
  • Right of publicity: AI depicting real people without consent
  • Moral rights: Attribution and integrity concerns

Specific Media Sector Considerations

Broadcast News

Television and radio news face particular AI considerations:

  • Synthetic anchors: AI-generated news presenters (used in some countries)
  • Voice cloning: AI reproducing recognizable anchor voices
  • Real-time generation: AI creating live captions and summaries
  • Deepfake detection: Verifying video authenticity

Social Media and Digital Platforms

Social platforms as news distributors:

  • Algorithm curation: AI deciding what news users see
  • Misinformation amplification: Engagement optimization vs. accuracy
  • Bot networks: AI-powered accounts spreading content
  • Recommendation liability: Responsibility for algorithmic promotion

Documentary and Long-Form

Documentary production using AI:

  • Archival restoration: AI enhancing historical footage
  • Synthetic recreation: AI generating scenes that weren’t filmed
  • Voice reconstruction: AI recreating voices of deceased individuals
  • Ethical boundaries: When does AI cross from restoration to fabrication?

Risk Management for Media Organizations

Pre-Publication Review

For AI-assisted content:

  1. Fact verification: Independent verification of AI-stated facts
  2. Source checking: Confirm AI-cited sources exist and say what is claimed
  3. Name verification: Ensure people mentioned exist and descriptions are accurate
  4. Legal review: Flag potential defamation, privacy, and copyright issues
  5. Disclosure decisions: Determine what AI involvement to disclose
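For illustration, a newsroom CMS could enforce the checklist above as a hard gate before publication. This is a minimal sketch under that assumption; the step names simply mirror the list and do not reflect any real editorial system:

```python
# The five required review steps, mirroring the checklist above.
REQUIRED_CHECKS = (
    "fact_verification",
    "source_checking",
    "name_verification",
    "legal_review",
    "disclosure_decision",
)

def outstanding_steps(completed: set[str]) -> list[str]:
    """Return review steps not yet completed; empty means clear to publish."""
    return [step for step in REQUIRED_CHECKS if step not in completed]

# A piece with only two steps done stays blocked.
missing = outstanding_steps({"fact_verification", "source_checking"})
if missing:
    print("Publication blocked pending:", ", ".join(missing))
```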

Staff Training Requirements

Journalists using AI need training on:

  • AI limitations: Understanding hallucination, bias, and errors
  • Verification techniques: How to check AI-generated content
  • Ethical guidelines: The organization’s AI use policies
  • Legal exposure: Defamation, copyright, and other risks
  • Disclosure requirements: When and how to disclose AI use

Error Correction Protocols

When AI-generated content contains errors:

  • Rapid response: Faster correction reduces damages
  • Prominent correction: Clear acknowledgment of the error
  • Root cause analysis: Why did the error occur?
  • System improvement: Prevent similar errors
  • Documentation: Record the correction for legal defense
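The documentation step lends itself to a structured record. The sketch below is hypothetical; the `CorrectionRecord` fields are assumptions about what a legal defense file might preserve:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CorrectionRecord:
    article_id: str
    error_description: str
    root_cause: str          # e.g. "hallucinated quote passed review unverified"
    corrective_action: str   # what the published correction said and did
    corrected_at: datetime   # timestamp supports the rapid-response point
    notified: list[str]      # desks or parties informed of the correction

record = CorrectionRecord(
    article_id="example-2024-0613",
    error_description="AI draft misattributed a statement to a named official",
    root_cause="hallucinated quote passed review unverified",
    corrective_action="Attribution removed; prominent correction appended",
    corrected_at=datetime.now(timezone.utc),
    notified=["legal", "standards desk"],
)
print(record.article_id, record.corrected_at.isoformat())
```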

Frequently Asked Questions

Can a news organization be sued for defamation over AI-generated content?

Yes. Publishers bear full responsibility for content they publish, regardless of whether AI or humans created it. When AI generates false statements of fact about identifiable individuals, the publisher faces potential defamation liability. The AI’s “hallucination” or error is not a defense; publishers must verify AI-generated facts before publication, just as they would verify any other source.

Does Section 230 protect AI-generated news content?

Generally no. Section 230 protects platforms from liability for content created by third-party users, but does not protect a publisher’s own content. If a news organization uses AI to generate articles that it then publishes under its own name, Section 230 does not apply. The organization is the creator, not the host, of that content.

Must news organizations disclose when content is AI-generated?

No federal law currently requires disclosure, but industry standards increasingly call for it. The AP, New York Times, Reuters, and other major outlets have adopted disclosure policies. Failure to disclose may affect credibility and, in some cases, could be relevant to fraud or deceptive practices claims. Several states are considering or have passed disclosure requirements for certain types of AI content.

Who is liable when AI creates a deepfake news video?

Liability depends on the circumstances but potentially includes: the creator of the deepfake, the platform that hosts it, the AI tool provider (in some cases), and anyone who knowingly republishes it. For fabricated news events, claims could include defamation (if individuals are depicted), fraud, intentional infliction of emotional distress, or violations of state deepfake laws. Election-related deepfakes face additional legal exposure.

Can AI-generated news articles be copyrighted?

This is unsettled law. The Copyright Office has stated that copyright requires human authorship, so purely AI-generated content may not be copyrightable. However, if humans contribute creative elements (selecting topics, editing content, arranging material), copyright may exist in those human contributions. News organizations should document human involvement and consult counsel on specific situations.

What standard of care applies to AI journalism?

The standard is evolving but builds on traditional journalism ethics. A reasonable publisher using AI would: verify AI-generated facts before publication, maintain human editorial oversight, disclose AI involvement appropriately, implement quality control processes, correct errors promptly, and train staff on AI limitations. Failure to meet industry-standard practices can establish negligence in legal proceedings.


Navigating AI in News and Media?

From defamation liability for AI-generated content to copyright questions about AI training data to editorial standards for AI-assisted reporting, news organizations face unprecedented legal complexity. Whether you're a publisher implementing AI tools, a journalist using AI assistance, or a platform distributing AI-generated content, understanding the emerging standard of care is essential. Connect with professionals who understand the intersection of media law, AI technology, and journalism ethics.


Architecture and engineering stand at the frontier of AI transformation. Generative design algorithms now propose thousands of structural options in minutes. Machine learning analyzes stress patterns that would take human engineers weeks to evaluate. Building information modeling systems automate coordination between disciplines. AI code compliance tools promise to catch violations before construction begins.