
Education AI Standard of Care


AI in Education: An Emerging Liability Crisis

Educational institutions face a rapidly expanding wave of AI-related litigation. Proctoring software that disproportionately flags students of color, AI detection tools that falsely accuse students of cheating, and massive data collection on minors have exposed schools, testing companies, and technology vendors to significant liability. The stakes extend beyond financial damages: these cases implicate fundamental questions of educational access, disability accommodation, and civil rights.

Related: See our guide to AI in Education Standards for emerging norms on AI tutoring, assessment policies, and international frameworks from UNESCO and OECD.

The Scale of AI in Education

AI tools have become ubiquitous in K-12 and higher education:

  • Nearly 70% of K-12 schools use Google Workspace for Education
  • 1 in 3 students now uses AI tools regularly, according to education research
  • Over 50 million students use Google Chromebooks in educational settings
  • Millions of students have been subjected to AI-powered proctoring since the COVID-19 pandemic

This rapid adoption has created a patchwork of policies, inconsistent enforcement, and a growing body of litigation testing the boundaries of institutional liability.

Major Litigation and Enforcement Actions

Meazure Learning / ProctorU Bar Exam Disaster (2025)

The February 2025 California Bar Exam became a landmark case study in proctoring technology failure and institutional liability.

What Happened:

On February 25-26, 2025, approximately 4,600 candidates sat for California’s first hybrid bar exam, with remote testing administered by ProctorU (operating as Meazure Learning). The platform experienced catastrophic failures:

  • Servers crashed repeatedly, displaying “website under heavy load” messages
  • Test-takers were disconnected mid-exam, losing work
  • Essential features (highlighters, notepads, copy/paste, spell check) were inaccessible
  • Some candidates couldn’t enter answers or save their work at all

The Lawsuits:

  1. State Bar of California v. Meazure Learning (May 2025)

    • The State Bar filed suit in Los Angeles Superior Court
    • Claims include fraud, negligent misrepresentation, and breach of contract
    • The complaint alleges Meazure promised 99.982% uptime and capacity for 25,000 simultaneous test-takers, promises contradicted by the platform’s actual performance (see the downtime illustration below)
    • Seeks compensatory and punitive damages
  2. McDowell v. ProctorU (Class Action) (2025)

    • Proposed class action on behalf of affected test-takers
    • Alleges Meazure failed to allocate sufficient server capacity
    • Seeks damages for harm to test results and career prospects

Legal Significance:

These cases establish that testing vendors may face direct liability when their technology fails examinees, not just contractual claims from institutional clients. The State Bar’s $4.1 million contract with Meazure demonstrates the scale of commercial relationships at stake.
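
For context, the 99.982% uptime Meazure allegedly promised leaves almost no tolerance for outages. The back-of-the-envelope sketch below is illustrative only; the two-day, eight-hour exam window is an assumption, not a figure from the complaint.

```python
# Illustrative arithmetic: how much downtime a 99.982% uptime promise allows.
# The exam window length is an assumption for illustration.
exam_window_hours = 2 * 8                      # assume two 8-hour testing days
promised_uptime = 0.99982

allowed_downtime_minutes = exam_window_hours * 60 * (1 - promised_uptime)
print(f"Allowed downtime over {exam_window_hours} hours: "
      f"{allowed_downtime_minutes:.2f} minutes")   # about 0.17 minutes, roughly 10 seconds
```

Repeated server crashes and mid-exam disconnections fall orders of magnitude outside such a service level, which is the gap the State Bar’s fraud and misrepresentation claims target.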

EPIC Complaint Against Proctoring Companies (2020-Present)

The Electronic Privacy Information Center’s complaint against the five largest proctoring providers established the template for challenging AI-based academic surveillance.

Companies Named:

  • Respondus
  • ProctorU
  • Proctorio
  • Examity
  • Honorlock

Four Categories of Violations Alleged:

  1. Unfair and Deceptive Collection of Excessive Personal Data

    • Proctoring software collects webcam footage, audio recordings, screen activity, browser history, and biometric identifiers
    • EPIC alleges this collection far exceeds what’s necessary to detect cheating
  2. Unfair Use of Opaque, Unproven AI Systems

    • AI algorithms flag “suspicious” behavior including eye movements, background noise, and physical movements
    • These systems have never been validated for accuracy or bias
    • No transparency about how flagging decisions are made
  3. Deceptive Uses of Facial Recognition

    • Proctorio and Honorlock claim facial recognition capabilities
    • Research shows facial recognition performs worse on darker skin tones
    • Students of color report being repeatedly unable to verify their identity
  4. Deceptive Claims About System Reliability

    • Vendors market their systems as accurate and reliable
    • Independent testing reveals significant error rates and bias

Regulatory Response:

Following EPIC’s complaint, the Federal Trade Commission issued warnings to software companies against invasive student surveillance. The Department of Education’s Office for Civil Rights released guidance in November 2024 specifically addressing discriminatory AI use in education, including proctoring systems.

Rignol v. Yale University - AI Detection False Positive (2025)

This federal lawsuit tests whether institutions can discipline students based on AI detection tools known to produce false positives.

The Facts:

  • An Executive MBA student at Yale School of Management submitted a 30-page final exam in May 2024
  • A teaching assistant flagged the submission as “too long and elaborate” with “near perfect punctuation”
  • Yale used GPTZero to analyze the work, with three answers scoring as “high probability” of AI generation
  • The professor did not run the entire exam through the software

Disciplinary Action:

  • The student was suspended for one year
  • Received a failing grade
  • University Honor Committee called it “the worst episode of academic dishonesty we have seen in sixteen years”

The Lawsuit’s Key Allegations:

  1. AI Detection Unreliability

    • The student submitted GPTZero scans of academic papers from Yale scholars, including former President Peter Salovey
    • These scans showed a “100% probability” that 30-year-old academic works were AI-generated, an obviously false result
    • Yale’s own policies acknowledge “no artificial intelligence tool can detect artificial intelligence use with certainty”
  2. Bias Against Non-Native Speakers

    • The plaintiff, a French entrepreneur, alleges GPTZero disproportionately flags work by non-native English speakers
    • Research supports this concern: AI detection tools misclassify writing by non-native English speakers at elevated rates
  3. Procedural Violations

    • Allegations of pressure to confess, including suggestions his visa could be revoked
    • Claims of irregularities in disciplinary proceedings

Court Proceedings:

  • District Judge Sarah Russell denied the student’s motion for an injunction that would have allowed him to graduate in May 2025
  • The underlying lawsuit, including claims of breach of contract, discrimination, and emotional distress, remains ongoing

Broader Implications:

Higher education institutions nationwide are using AI detection tools without clear policies or understanding of their limitations. The Yale case may establish precedent for how courts evaluate reliance on these technologies.

Harris v. Hingham Public Schools - K-12 AI Use Discipline (2024)

This Massachusetts lawsuit addresses whether schools can punish students for AI use when no clear policy exists.

The Incident:

A high school senior used AI tools to research and create an outline for a history essay about Kareem Abdul-Jabbar’s civil rights activism. The student was paired with another student for the assignment.

Punishments Imposed:

  • Detention
  • Failing grade on the assignment
  • Barred from National Honor Society
  • The faculty advisor called it “the worst episode of academic dishonesty we have seen in sixteen years”

Central Legal Argument:

The lawsuit argues that:

  • Hingham High School had no AI policy during the 2023-24 school year
  • Neither the teacher nor assignment materials prohibited AI use
  • The school only added AI policies to its handbook the year after the punishment

Relief Sought:

  • Raise the student’s grade to a “B”
  • Remove academic sanctions from his record
  • Stop characterizing his AI use as “cheating” or “academic dishonesty”

Policy Implications:

A July 2024 U.S. Department of Education report found only 15 states had developed AI guidance for schools. Massachusetts was not among them. This case may establish that schools cannot retroactively punish conduct that wasn’t clearly prohibited.

Student Data Privacy Litigation

Google Workspace for Education Lawsuits (2024-2025)

Google faces multiple lawsuits alleging unlawful collection of student data through its education products.

Schwarz v. Google LLC (April 2025)

A proposed class action filed in California federal court alleges:

  • Google’s Workspace for Education collects “thousands of data points that span a child’s life”
  • Neither students nor parents consented to this collection
  • Google embeds hidden tracking technology in Chrome that creates unique digital “fingerprints”
  • The company substitutes school consent for parental consent

Scope of the Problem:

  • Google Workspace for Education is used by nearly 70% of K-12 schools
  • Over 50 million students use Chromebooks
  • 170 million users are on Google Workspace for Education globally

Alleged Violations:

  • Fourth and Fourteenth Amendment rights
  • Federal Wiretap Act
  • California Invasion of Privacy Act

Illinois Biometric Privacy Settlement ($8.75 Million - 2025)

In a separate case, Google agreed to pay $8.75 million to settle claims that it unlawfully captured and stored Illinois students’ biometric data.

Key Details:

  • Preliminary court approval granted May 15, 2025
  • Covers Illinois students who had voice or face models created through Google Workspace for Education
  • Class period: March 26, 2015 through May 15, 2025
  • Estimated individual payments: $30-$100 depending on claims

Pattern of Enforcement:

Google has faced repeated privacy actions involving children:

  • 2019: $170 million FTC settlement over YouTube child data collection
  • 2020: $3.8 million New Mexico settlement for child data collection

AI Bias in Educational Tools

Proctoring Software and Discrimination

Independent research has documented systematic bias in AI proctoring systems:

Facial Recognition Failures:

  • A Black student at the University of Colorado Denver reported that Proctorio repeatedly couldn’t recognize her face
  • She was denied access to tests so often she had to arrange alternatives with her professor
  • Her white peers never experienced this problem

Disability Discrimination:

  • Proctoring systems flag “unusual” eye movements, physical movements, and vocalizations
  • Students with ADHD, autism, Tourette’s syndrome, and other conditions are flagged at higher rates
  • The systems normalize neurotypical behavior while penalizing natural variation

DOE Civil Rights Guidance (November 2024):

The Department of Education’s Office for Civil Rights specifically addressed these concerns:

  • Schools must evaluate AI tools for accuracy and potential bias before deployment
  • Proctoring systems must accommodate students with disabilities
  • Use of biased AI may violate civil rights laws

AI Detection Tool Bias

AI plagiarism and AI-use detectors have shown systematic problems:

False Positive Rates:

  • Studies show AI detectors produce significant false positives (a worked example follows this list)
  • Non-native English speakers are flagged at disproportionately higher rates
  • Neurodivergent students’ writing patterns may trigger false positives
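
Even a low false positive rate translates into many wrongly flagged students at scale. The sketch below is illustrative only; the submission volume, true rate of AI use, and detector error rates are assumptions, not figures from any study cited here.

```python
# Illustrative base-rate arithmetic: what share of AI-detection flags are false accusations?
# Every number below is an assumption for illustration.
submissions = 10_000        # essays screened in a term (assumed)
true_ai_rate = 0.05         # fraction actually AI-generated (assumed)
detector_tpr = 0.90         # detector sensitivity (assumed)
detector_fpr = 0.01         # false positive rate on human-written work (assumed)

true_positives = submissions * true_ai_rate * detector_tpr           # 450 flags
false_positives = submissions * (1 - true_ai_rate) * detector_fpr    # 95 flags
flagged = true_positives + false_positives

print(f"{false_positives:.0f} of {flagged:.0f} flagged students "
      f"({false_positives / flagged:.0%}) did nothing wrong")
```

Under these assumptions roughly one flag in six points at an innocent student, and if non-native speakers or neurodivergent writers face higher false positive rates, they absorb a disproportionate share of those wrongful accusations.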

The “Automation of Ableism”: Researchers have documented how AI detection tools classify original work by neurodivergent students as fraudulent, embedding assumptions about “normal” writing patterns that exclude many students.

The Emerging Standard of Care

For Educational Institutions

Based on litigation trends and regulatory guidance, institutions should:

  1. Evaluate AI Tools Before Deployment

    • Assess accuracy claims with independent testing
    • Investigate bias testing methodology
    • Ensure disability accommodations are possible
  2. Establish Clear Policies

    • Define permitted and prohibited AI uses before enforcement
    • Ensure policies are communicated to students
    • The Harris v. Hingham case warns against retroactive enforcement
  3. Implement Human Review

    • AI detection results should trigger investigation, not automatic discipline (a review-record sketch follows this list)
    • Proctoring flags should be reviewed by humans before action
    • Students should have opportunity to respond before sanctions
  4. Accommodate Disabilities

    • Ensure alternative testing arrangements are available
    • Don’t rely on AI systems that can’t accommodate different abilities
    • Document accommodation processes
  5. Protect Student Privacy

    • Understand what data vendors collect and retain
    • Obtain appropriate parental consent where required
    • Comply with FERPA, COPPA, and state privacy laws
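
One way to operationalize human review (item 3) and accommodation documentation (item 4) is to record every AI-generated flag together with the human decision made on it before any sanction issues. The schema below is a hypothetical sketch; the field names and structure are illustrative, not a prescribed standard.

```python
# Hypothetical record of human review of an AI-generated flag.
# Field names and structure are illustrative, not a prescribed standard.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AIFlagReview:
    case_id: str
    tool_name: str                        # proctoring or detection product used
    tool_output: str                      # raw flag or score, preserved verbatim
    flagged_at: datetime
    reviewer: str                         # human responsible for the decision
    student_response: str = ""            # student's explanation, gathered before sanctions
    accommodations_considered: bool = False
    outcome: str = "pending"              # e.g., "dismissed", "referred", "sanctioned"
    rationale: str = ""                   # why the reviewer reached this outcome
    reviewed_at: Optional[datetime] = None
```

Preserving the raw tool output, the student’s response, and the reviewer’s rationale builds the kind of record the Yale and Hingham disputes suggest institutions will need when their decisions are challenged.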

For Technology Vendors

Vendors face expanding liability exposure:

  1. Performance Guarantees

    • The Meazure Learning lawsuits show that failure to meet stated performance specifications creates liability
    • Don’t overpromise system capabilities
  2. Bias Testing and Disclosure

    • Test systems for differential impact by race, disability, and other protected categories
    • Disclose known limitations and error rates (a per-group error-rate sketch follows this list)
  3. Data Minimization

    • EPIC’s complaint establishes that excessive data collection creates legal risk
    • Collect only what’s necessary for the stated purpose
  4. Transparency

    • Be clear about how AI systems make decisions
    • Provide meaningful information about accuracy and limitations
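
For bias testing (item 2), one concrete check is to measure error rates separately for each relevant group on a labeled validation set, as sketched below. The group labels and records are placeholders; a credible evaluation requires a large, representative sample and appropriate statistical analysis.

```python
# Hypothetical per-group false positive rate check on labeled validation data.
# Each record is (group, actually_violated, flagged_by_system); the data is a placeholder.
from collections import defaultdict

records = [
    ("native_speaker", False, False),
    ("native_speaker", False, True),
    ("non_native_speaker", False, True),
    ("non_native_speaker", False, True),
    ("non_native_speaker", False, False),
    # ... a real evaluation uses a large, representative sample
]

counts = defaultdict(lambda: {"honest": 0, "false_flags": 0})
for group, violated, flagged in records:
    if not violated:                      # only honest work counts toward the FPR
        counts[group]["honest"] += 1
        counts[group]["false_flags"] += int(flagged)

for group, c in counts.items():
    fpr = c["false_flags"] / c["honest"]
    print(f"{group}: false positive rate = {fpr:.0%}")
```

Reporting these per-group rates, rather than a single aggregate accuracy figure, is one way to make disclosure of limitations meaningful.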

Practical Guidance

Before Adopting Educational AI Tools

  • Request vendor documentation on accuracy testing and bias evaluation
  • Verify systems can accommodate students with disabilities
  • Ensure parental consent mechanisms meet legal requirements
  • Establish clear written policies on permitted use

During Use

  • Monitor outcomes for differential impact by student demographics (see the monitoring sketch after this list)
  • Maintain incident reporting for technology failures
  • Document human review of AI-flagged cases
  • Preserve records of how decisions were made
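
The monitoring sketch below compares flag rates across demographic groups, borrowing the four-fifths (80%) threshold from employment-selection practice as a rough screening heuristic. The counts, group names, and threshold are illustrative assumptions, and a disparity surfaced this way warrants investigation rather than proving discrimination.

```python
# Hypothetical screen for differential flag rates across student groups.
# Counts and the 0.8 threshold are illustrative; a large disparity triggers review, not conclusions.
flag_counts = {"group_a": 12, "group_b": 31}        # students flagged by the AI tool (assumed)
student_counts = {"group_a": 400, "group_b": 380}   # students screened, by group (assumed)

rates = {g: flag_counts[g] / student_counts[g] for g in flag_counts}
for group, rate in rates.items():
    print(f"{group}: flagged at {rate:.1%}")

# Four-fifths-style screen: if the lowest flag rate is under 80% of the highest,
# the most-flagged group may be bearing a disproportionate burden.
disparity_ratio = min(rates.values()) / max(rates.values())
verdict = "review" if disparity_ratio < 0.8 else "ok"
print(f"Disparity ratio = {disparity_ratio:.2f} -> {verdict}")
```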

If Problems Arise

  • Treat AI outputs as starting points for investigation, not conclusions
  • Provide students meaningful opportunity to respond before sanctions
  • Consider whether institutional policies clearly addressed the situation
  • Engage legal counsel early in potential liability situations
