
Advertising & Marketing AI Standard of Care


Artificial intelligence has transformed advertising from an art into a science, and into a potential legal minefield. AI systems now write ad copy, generate images, target consumers with unprecedented precision, and even create synthetic spokespersons that never existed. This power comes with significant legal risk: the FTC has made clear that AI-generated deception is still deception, and traditional advertising law applies with full force to automated campaigns.

The standard of care for AI in advertising is evolving rapidly, but the foundation is clear: AI does not provide immunity from advertising law. Advertisers, agencies, platforms, and AI tool providers all face potential liability when AI-powered campaigns deceive consumers, violate privacy, or cause other legally cognizable harms.

Key figures:

  • $5.8B: FTC advertising enforcement penalties (2023)
  • 88%: marketers using AI for content generation (2024 survey)
  • $500K+: average settlement in TCPA AI robocall cases
  • 32: state AG actions against deceptive AI advertising (2023-24)

AI Applications in Advertising & Marketing
#

Programmatic Advertising
#

AI drives the real-time bidding systems that dominate digital advertising:

Component | AI Function | Legal Considerations
Real-time bidding | Millisecond ad placement decisions | Brand safety, context appropriateness
Audience targeting | Identify high-value consumers | Discrimination, privacy, manipulation
Dynamic pricing | Adjust bids based on predicted value | Price discrimination concerns
Attribution modeling | Credit conversions to touchpoints | Accuracy of performance claims
Fraud detection | Identify invalid traffic | Advertiser protection duties

Programmatic systems make billions of decisions daily without human review, each one potentially creating liability.
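Because no human reviews each bid, compliance controls have to live inside the bidding logic itself. The sketch below shows one way a pre-bid gate might decline placements that raise brand-safety or discrimination concerns; the request fields, category labels, and audit flag are illustrative assumptions, not any exchange's actual schema (real systems work against the OpenRTB specification and much richer policy signals).

```python
# Minimal sketch of a pre-bid compliance gate in a programmatic bidder.
# Field names ("ad_category", "targeting_audited", "brand_safe") are
# hypothetical placeholders for illustration only.

SENSITIVE_CATEGORIES = {"housing", "employment", "credit"}  # civil-rights-sensitive ad types

def should_bid(bid_request: dict) -> bool:
    """Return False for placements that need policy or human review
    before the system is allowed to bid on them automatically."""
    if bid_request.get("ad_category") in SENSITIVE_CATEGORIES:
        # Housing, employment, and credit ads carry FHA/Title VII/ECOA risk,
        # so require that the targeting plan has passed a disparate-impact audit.
        return bool(bid_request.get("targeting_audited", False))
    if not bid_request.get("brand_safe", True):
        return False  # skip placements flagged by brand-safety signals
    return True

print(should_bid({"ad_category": "retail", "brand_safe": True}))           # True
print(should_bid({"ad_category": "housing", "targeting_audited": False}))  # False
```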

Generative AI Content Creation
#

Large language models and image generators now create advertising content at scale:

Text generation:

  • Ad copy and headlines
  • Email marketing campaigns
  • Social media posts
  • Product descriptions
  • Landing page content

Image generation:

  • Product imagery
  • Lifestyle photographs
  • Background and composite images
  • Social media graphics
  • Banner advertisements

Video generation:

  • Synthetic spokesperson videos
  • Product demonstrations
  • Animated advertisements
  • Social media content
FTC Warning on AI-Generated Content
The FTC has explicitly warned that AI-generated content, including images, videos, and text, must comply with the same truth-in-advertising standards as human-created content. Using AI to generate false testimonials, fake reviews, or misleading product demonstrations is illegal regardless of the technology used to create them.

Personalization and Targeting
#

AI enables hyper-personalized advertising that raises legal questions:

  • Behavioral targeting based on browsing history
  • Predictive modeling of consumer preferences
  • Lookalike audiences derived from existing customers
  • Dynamic creative optimization showing different ads to different users
  • Contextual targeting based on content consumption

AI-Powered Customer Interactions
#

AI increasingly handles direct consumer communications:

  • Chatbots providing product information
  • Virtual assistants answering questions
  • Automated email responses
  • AI phone systems handling inquiries
  • Social media bots engaging with consumers

FTC Regulatory Framework
#

Section 5 Unfair and Deceptive Acts
#

The FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” This applies fully to AI-powered advertising:

Deception standard:

  • A representation, omission, or practice
  • That is likely to mislead consumers acting reasonably under the circumstances
  • That is material (i.e., likely to affect purchasing decisions)

AI-specific applications:

  • AI-generated fake testimonials are deceptive
  • Undisclosed AI personas deceive consumers
  • Synthetic media creating false impressions violates Section 5
  • AI-optimized dark patterns may constitute unfair practices

FTC AI Guidance and Enforcement Signals
#

The FTC has issued extensive guidance on AI in advertising:

April 2023 - “Keep your AI claims in check”:

  • Warned against making false or unsubstantiated AI capability claims
  • Applies to marketing AI products and using AI in marketing
  • Established that “AI” claims require substantiation

February 2024 - Synthetic media guidance:

  • AI-generated images must not deceive consumers
  • Fake reviews and testimonials violate existing law
  • Deepfakes in advertising face enhanced scrutiny

August 2024 - Final Rule on fake reviews:

  • Explicitly prohibits AI-generated fake reviews and testimonials
  • Bans buying or selling fake reviews
  • Civil penalties of up to $50,120 per violation
FTC Penalty Authority
Under the FTC’s penalty authority, violations of trade regulation rules can result in civil penalties of up to $50,120 per violation (as of 2024). With AI systems generating millions of ads, the potential exposure is enormous.
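To make the scale concrete, here is a hypothetical back-of-the-envelope calculation (the ad count is invented for illustration, and actual penalties depend on what a court treats as a "violation" and on statutory factors):

```python
# Hypothetical exposure math using the per-violation ceiling cited above.
violating_ads = 1_000_000        # assumed number of rule-violating AI-generated ads
penalty_ceiling = 50_120         # per-violation civil penalty figure from the text
print(f"Theoretical maximum exposure: ${violating_ads * penalty_ceiling:,}")
# Theoretical maximum exposure: $50,120,000,000
```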

Endorsement and Testimonial Guidelines
#

The FTC’s Endorsement Guides apply directly to AI-generated endorsements:

Requirements:

  • Endorsements must reflect honest opinions
  • Material connections must be disclosed
  • Advertised results must be typical, or atypical results must be disclosed as such
  • Fake endorsers, including AI personas, violate the Guidelines

AI-specific considerations:

  • AI-generated “customer testimonials” must reflect real customer experiences
  • Synthetic influencers must disclose their artificial nature
  • AI personas cannot endorse products they haven’t “used”

Health and Safety Claims
#

The FTC applies heightened scrutiny to health and safety advertising:

  • AI cannot generate unsubstantiated health claims
  • Medical AI advertising must meet rigorous proof standards
  • Safety claims require competent and reliable scientific evidence
  • “AI-powered” health products face particular scrutiny

Deepfake Advertising Liability
#

The Deepfake Advertising Problem
#

AI now enables creation of synthetic video featuring real people or entirely fabricated personas:

Deepfake applications in advertising:

  • Celebrity endorsements without consent
  • Synthetic spokespersons that appear real
  • Manipulated product demonstrations
  • Fake user-generated content campaigns
  • Fraudulent influencer content

Legal Theories for Deepfake Liability
#

Multiple legal theories apply to deepfake advertising:

Legal Theory | Application | Potential Plaintiffs
Right of publicity | Unauthorized use of likeness | Celebrities, public figures
False endorsement (Lanham Act) | Misleading about endorsement | Depicted individuals
Consumer protection | Deceiving consumers | State AGs, consumers, FTC
Defamation | False statements harming reputation | Depicted individuals
Fraud | Intentional deception for gain | Consumers, competitors

State Deepfake Laws
#

States have enacted specific deepfake legislation:

Texas (2019):

  • Criminalizes deepfakes intended to influence elections
  • Private right of action for depicted individuals

California (2019-2020):

  • AB 730: Election-related deepfakes
  • AB 602: Non-consensual deepfake pornography
  • Civil and criminal penalties

New York (2023):

  • Extended right of publicity to digital replicas
  • Includes AI-generated likenesses

Federal proposals:

  • DEEPFAKES Accountability Act (proposed)
  • NO FAKES Act (proposed, 2024)
  • Bipartisan momentum for federal regulation

Case Study: Tom Hanks AI Dental Ad
#

In October 2023, Tom Hanks posted on Instagram warning fans about an AI-generated advertisement that used his likeness, without his authorization, to promote a dental plan. While no lawsuit was filed publicly, the incident highlighted:

  • AI can create convincing celebrity endorsements
  • Brands may use (or enable use of) unauthorized deepfakes
  • Even famous individuals struggle to prevent misuse
  • Consumer deception is immediate and widespread

TCPA and AI Communications
#

Telephone Consumer Protection Act Framework
#

The TCPA regulates automated calls and texts, directly implicating AI communications:

TCPA prohibitions:

  • Robocalls to cell phones without consent
  • Prerecorded messages to residential lines without consent
  • Texts via auto-dialer without consent
  • Fax advertising without consent

AI applications:

  • AI-generated voice calls (synthetic voices)
  • AI chatbot text campaigns
  • Automated text message marketing
  • AI-powered customer outreach

FCC AI Voice Ruling (February 2024)
#

In February 2024, the FCC ruled that AI-generated voice calls are “artificial” under the TCPA:

Key holdings:

  • AI-cloned voices constitute “artificial” prerecorded voices
  • TCPA’s robocall restrictions apply to AI voice technology
  • Consent requirements unchanged
  • Violations subject to $500-$1,500 per call penalties
AI Voice Call Liability
With TCPA penalties of $500-$1,500 per violation, an AI system making thousands of unauthorized calls can create millions of dollars in liability. Several class actions have already been filed against companies using AI voice technology without proper consent.
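The exposure math is straightforward. The figures below are hypothetical, but they show how quickly per-call statutory damages compound for an automated calling system:

```python
# Illustrative TCPA exposure calculation (all campaign numbers are made up).
calls_without_consent = 10_000      # AI voice calls placed without valid consent
per_call_statutory = 500            # statutory damages per negligent violation
per_call_willful = 1_500            # up to treble damages for willful violations

print(f"Baseline exposure: ${calls_without_consent * per_call_statutory:,}")   # $5,000,000
print(f"Willful exposure:  ${calls_without_consent * per_call_willful:,}")     # $15,000,000
```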

AI Chatbot Compliance
#

AI chatbots in marketing must comply with:

  • TCPA text messaging rules: consent required for marketing texts
  • CAN-SPAM: commercial email requirements apply to AI
  • State laws: many states have additional consent requirements
  • Platform rules: social media platforms have bot disclosure requirements
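One practical consequence is that an AI chatbot or texting system should never dispatch a marketing message without first checking a consent record. A minimal sketch of such a consent gate appears below; the data model and in-memory store are assumptions (real programs typically query a CRM or consent-management platform and also honor quiet hours, opt-out keywords, and state-specific rules).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    phone: str
    express_written_consent: bool   # prior express written consent for marketing
    revoked: bool                   # set True if the consumer opted out ("STOP")
    obtained_at: datetime

# Hypothetical in-memory consent store for illustration only.
CONSENTS = {
    "+15551230001": ConsentRecord("+15551230001", True, False,
                                  datetime(2024, 3, 1, tzinfo=timezone.utc)),
    "+15551230002": ConsentRecord("+15551230002", False, False,
                                  datetime(2024, 3, 1, tzinfo=timezone.utc)),
}

def may_send_marketing_text(phone: str) -> bool:
    """Allow an AI-generated marketing text only when current, unrevoked
    express written consent is on file for the number."""
    record = CONSENTS.get(phone)
    return bool(record and record.express_written_consent and not record.revoked)

for number in CONSENTS:
    print(number, "->", "send" if may_send_marketing_text(number) else "suppress")
```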

Targeted Advertising Discrimination
#

Civil Rights Implications
#

AI-powered ad targeting can violate civil rights laws:

Housing advertising (Fair Housing Act):

  • Cannot exclude based on race, color, religion, sex, familial status, national origin, disability
  • AI targeting that achieves same effect as intentional exclusion violates FHA
  • Facebook settled for $5M+ over discriminatory housing ad targeting

Employment advertising (Title VII):

  • Job ads cannot target/exclude based on protected characteristics
  • AI optimization may inadvertently discriminate
  • EEOC has signaled enforcement interest

Credit advertising (ECOA):

  • Equal Credit Opportunity Act prohibits discrimination
  • Targeted credit ads that exclude protected groups face liability

Meta/Facebook Settlement Precedent
#

The Department of Justice settlement with Meta over housing ad discrimination established:

  • AI ad targeting can violate civil rights laws
  • Advertisers and platforms face liability
  • “Lookalike audiences” can perpetuate discrimination
  • Compliance requires affirmative testing for disparate impact
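The last requirement, affirmative testing for disparate impact, can be operationalized as a recurring audit of who actually sees the ads. The sketch below compares delivery rates across demographic groups and flags large gaps; the data, group labels, and the 0.80 threshold (borrowed loosely from the EEOC's four-fifths rule) are illustrative assumptions, not a legal standard for advertising.

```python
# Minimal sketch of a disparate-impact check on ad delivery data.
# Delivery counts and group labels are hypothetical placeholders.

delivery = {
    "group_a": {"eligible": 120_000, "delivered": 18_000},
    "group_b": {"eligible": 100_000, "delivered": 9_500},
    "group_c": {"eligible":  80_000, "delivered": 11_800},
}

rates = {g: s["delivered"] / s["eligible"] for g, s in delivery.items()}
highest_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest_rate
    status = "REVIEW" if ratio < 0.80 else "ok"   # illustrative threshold
    print(f"{group}: delivery_rate={rate:.3f}  ratio_to_highest={ratio:.2f}  [{status}]")
```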

Digital Redlining in Advertising
#

AI systems trained on historical data may perpetuate advertising exclusion:

  • Ads for opportunities may not reach minority communities
  • “Optimization” for engagement may exclude older/disabled users
  • Lookalike modeling from biased data reproduces bias
  • Geographic targeting can serve as proxy for race

Influencer Marketing and AI
#

Virtual Influencers
#

AI-generated “influencers” with no human behind them raise novel questions:

Disclosure requirements:

  • Must virtual influencers disclose they’re AI?
  • What constitutes adequate disclosure?
  • Can AI “authentically” recommend products?

Examples:

  • Lil Miquela (AI influencer with millions of followers)
  • Various brand-created AI personas
  • AI-generated “employees” and spokespersons

FTC position: Material information that would affect consumer decisions must be disclosed. If an influencer’s artificial nature is material, which it often is, disclosure is required.

AI-Enhanced Human Influencers
#

AI also assists human influencers in ways requiring disclosure:

  • AI-generated portions of content
  • AI-written scripts and captions
  • AI-enhanced images and videos
  • AI-managed engagement and responses

Platform and Advertiser Liability
#

Advertiser Primary Liability
#

Advertisers bear primary responsibility for AI-powered campaigns:

Due diligence requirements:

  • Verify AI-generated claims are truthful
  • Ensure AI content doesn’t deceive consumers
  • Test targeting for discriminatory effects
  • Maintain records of AI-generated materials

Agency Liability
#

Advertising agencies face potential liability for:

  • Recommending AI tools that generate deceptive content
  • Failing to review AI-generated materials
  • Negligent implementation of AI campaigns
  • Knowing participation in deceptive practices

Platform Liability
#

Ad platforms have potential exposure:

Section 230 protection:

  • Generally protects platforms from liability for user content
  • Does not protect platforms’ own advertising products
  • Algorithmic amplification may not be protected

Direct liability:

  • Platforms’ targeting tools can create direct liability
  • Knowledge of discriminatory patterns creates duty
  • FTC has pursued platforms directly

Compliance Framework for AI Marketing
#

Pre-Campaign Requirements
#

Before launching AI-powered advertising:

  1. Substantiate all claims: AI-generated claims need the same substantiation as human-created claims
  2. Review for deception: manual review of AI-generated content before publication
  3. Test for discrimination: analyze targeting for disparate impact
  4. Verify endorsements: ensure any endorsements reflect real opinions
  5. Obtain proper consent: TCPA and state law compliance for communications
  6. Document AI use: maintain records of what AI generated
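The documentation step in particular lends itself to lightweight tooling: a provenance record that captures what the AI produced, who reviewed it, and where the claim substantiation lives. The sketch below is one possible shape for such a record; the field names, model identifier, and reference IDs are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_asset(asset_text: str, model_name: str, prompt: str,
                    reviewer: str, substantiation_refs: list) -> dict:
    """Build a provenance entry for an AI-generated ad asset before publication."""
    return {
        "sha256": hashlib.sha256(asset_text.encode("utf-8")).hexdigest(),
        "model": model_name,
        "prompt": prompt,
        "human_reviewer": reviewer,              # person who approved the copy
        "substantiation": substantiation_refs,   # e.g. study or test-report IDs
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }

entry = record_ai_asset(
    asset_text="Clinically tested formula reduces plaque in two weeks.",
    model_name="example-llm-v1",                 # hypothetical model identifier
    prompt="Write a headline for the toothpaste launch.",
    reviewer="reviewer@example.com",
    substantiation_refs=["CLIN-2024-017"],       # hypothetical study reference
)
print(json.dumps(entry, indent=2))
```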

Ongoing Monitoring
#

During campaigns:

  • Monitor for drift: AI systems may generate problematic content over time
  • Track complaints: consumer complaints may signal problems
  • Audit targeting: regularly test for discriminatory delivery
  • Update consent: ensure consent remains valid as campaigns evolve
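Complaint tracking is the easiest of these to automate. A toy version of a drift alert is sketched below; the counts and the tripling threshold are arbitrary placeholders, and a production monitor would also watch content-level signals.

```python
# Toy complaint-rate monitor: alert when the current rate far exceeds the
# campaign's launch baseline. All numbers are hypothetical.

def complaint_rate(complaints: int, impressions: int) -> float:
    return complaints / impressions if impressions else 0.0

baseline = complaint_rate(complaints=12, impressions=400_000)
current = complaint_rate(complaints=95, impressions=380_000)

if current > 3 * baseline:
    print("ALERT: complaint rate spiked; pause AI creative and review recent output")
else:
    print("Complaint rate within expected range")
```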

Disclosure Best Practices
#

When disclosure is required:

  • Clear and conspicuous: consumers must actually notice the disclosure
  • Plain language: avoid jargon and legalese
  • Proximity: place the disclosure near the claim it modifies
  • Unavoidable: it cannot be hidden or easily missed

For AI-generated content specifically:

  • Disclose when reasonable consumer would want to know
  • Consider disclosing AI involvement in image/video creation
  • Virtual influencers should disclose artificial nature
  • AI-written testimonials should not be presented as human-written
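For virtual-influencer or AI-authored posts, the simplest safeguard is to make the disclosure part of the publishing path rather than something added by hand. A minimal sketch follows; the disclosure wording, hashtag, and placement are assumptions, and whether a given disclosure is "clear and conspicuous" always depends on context.

```python
# Sketch: ensure every post from an AI-generated persona carries a disclosure.
AI_DISCLOSURE = "This account is a computer-generated (AI) character."

def publish_virtual_influencer_post(caption: str) -> str:
    """Append the AI disclosure unless the caption already contains it."""
    if AI_DISCLOSURE.lower() not in caption.lower():
        caption = f"{caption}\n\n{AI_DISCLOSURE} #AIgenerated"
    return caption

print(publish_virtual_influencer_post("Loving this new sneaker drop!"))
```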

State and International Requirements
#

State Consumer Protection Laws
#

State attorneys general actively enforce against AI advertising:

California:

  • Unfair Competition Law (Bus. & Prof. Code § 17200)
  • Consumer Legal Remedies Act
  • Automatic Renewal Law (applicable to AI-sold subscriptions)
  • Bot Disclosure Law (requiring bot identification online)

New York:

  • General Business Law § 349-350
  • Active AG enforcement on digital advertising

Multiple states:

  • Mini-FTC Acts in all 50 states
  • State AG consumer protection divisions
  • Coordination through NAAG (National Association of Attorneys General)

GDPR and International
#

For companies advertising internationally:

EU GDPR:

  • Consent requirements for targeted advertising
  • Profiling restrictions and right to opt out
  • Data minimization requirements
  • Right to explanation of automated decisions

EU AI Act:

  • High-risk classification for certain AI systems
  • Transparency requirements for AI-generated content
  • Upcoming requirements for emotion recognition and biometric AI

UK, Canada, Australia:

  • Similar consumer protection frameworks
  • Increasing focus on AI advertising

Frequently Asked Questions
#

Can I use AI to generate customer testimonials?

No. The FTC’s final rule on fake reviews explicitly prohibits AI-generated fake testimonials. Testimonials must reflect the genuine opinions and experiences of real customers. Using AI to create fictional customers or to fabricate testimonial content violates FTC rules and can result in civil penalties of up to $50,120 per violation. You can use AI to help real customers articulate their experiences, but the underlying opinion must be genuine.

Do I need to disclose that ad copy was AI-generated?

Not always, but sometimes. The FTC requires disclosure of information material to consumers. If the AI-generated nature of content would affect a consumer’s purchasing decision, such as AI-generated images that make products look different from reality, or AI-written “expert” recommendations, disclosure may be required. When in doubt, disclose. The trend is toward more disclosure requirements, not fewer.

Can AI-targeted advertising violate civil rights laws?

Yes. The Fair Housing Act, Title VII, and Equal Credit Opportunity Act apply to advertising. If AI targeting excludes protected groups from seeing housing, employment, or credit ads, even without intentional discrimination, liability can result. Meta paid over $5 million to settle claims that its AI ad targeting discriminated in housing advertising. Test your targeting systems for disparate impact.

What are the rules for AI robocalls and text messages?

The TCPA requires prior express consent for robocalls to cell phones and prerecorded calls to residential lines. In February 2024, the FCC ruled that AI-generated voices are “artificial” under the TCPA, so AI voice calls face the same restrictions as traditional robocalls. Violations can result in $500-$1,500 per call in statutory damages. Class actions in this area have resulted in settlements in the hundreds of millions.

Can I create an AI influencer to promote my products?

You can, but with significant caveats. The AI nature of the influencer must be disclosed if it’s material to consumers, which it usually is, since consumers value authentic human recommendations. The FTC’s Endorsement Guides require that endorsements reflect honest opinions of the endorser; AI cannot have opinions. Virtual influencers should be clearly identified as AI-generated characters, not presented as real humans.

Who is liable if AI generates a defamatory or false advertisement?

The advertiser bears primary liability regardless of whether AI generated the content. “The AI wrote it” is not a defense. Advertisers must review AI-generated content before publication and take responsibility for its accuracy. Agencies that recommend or implement AI tools may share liability. The AI tool provider may have liability in some circumstances, particularly if the tool was marketed as safe for advertising use.



Navigating AI Advertising Compliance?

From FTC enforcement against AI-generated content to TCPA liability for AI voice calls to civil rights implications of targeted advertising, AI marketing faces unprecedented legal complexity. Whether you're an advertiser deploying AI tools, an agency implementing AI campaigns, or a platform providing AI advertising technology, understanding the evolving standard of care is essential. Connect with professionals who understand the intersection of advertising law, AI technology, and consumer protection.

