
AI in Education Standards: Assessment, Tutoring, and Responsible Use


As AI tutoring systems, chatbots, and assessment tools become ubiquitous in education, a new standard of care is emerging for their responsible deployment. From Khan Academy’s Khanmigo reaching millions of students to universities grappling with ChatGPT policies, institutions face critical questions: When does AI enhance learning, and when does it undermine it? What safeguards protect student privacy and prevent discrimination? And who bears liability when AI systems fail?

This page examines the emerging norms governing AI in educational assessment and tutoring, including the frameworks being developed by international bodies, federal and state governments, universities, K-12 districts, and leading edtech platforms.


The Scale of AI in Education

AI Education by the Numbers
  • 1 in 3 students now use AI tools regularly for schoolwork
  • 170 million users on Google Workspace for Education globally
  • 260+ school districts have piloted Khan Academy’s Khanmigo AI tutor
  • 27 states have issued AI guidance for K-12 schools (as of August 2025)
  • 1 in 3 college applicants used AI to help write admissions essays in 2023-24

The rapid adoption of AI in education has outpaced policy development, creating a patchwork of institutional responses and significant uncertainty about acceptable standards.


International Frameworks: UNESCO and OECD

UNESCO Guidelines on AI in Education

UNESCO, as the UN’s specialized agency for education and custodian of the Recommendation on the Ethics of Artificial Intelligence (2021), has established foundational principles for AI in education:

Core Principles:

  • AI should advance well-defined educational objectives grounded in evidence
  • Systems must be responsive to educational needs and societal values
  • Diverse stakeholder input is required for effective AI policies
  • Collaboration among educators, AI developers, and policymakers is essential

UNESCO emphasizes that AI integration requires balancing innovation with ethics: technology should serve educational goals, not define them.

OECD AI Literacy Framework (2025)

In May 2025, the OECD and European Commission released “Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education,” a landmark document establishing global AI literacy standards for school-aged children.

Four Core Domains:

Domain         | Focus
Engage with AI | Using AI tools effectively and critically
Create with AI | Producing content with AI assistance
Manage AI      | Understanding AI limitations and risks
Design with AI | Developing AI solutions

The framework sets benchmarks for policy, curriculum, teaching, and assessment, preparing students not just to use AI, but to understand and shape it.

OECD Digital Education Outlook Findings

The OECD Digital Education Outlook 2023 surveyed 18 countries on generative AI governance in education:

Key Finding: Most national governments have published non-binding guidance rather than binding regulations. In the absence of central mandates, school-level decisions by teachers and administrators significantly influence how AI is integrated.

Emerging Best Practices:

  • Adapting curricula to incorporate AI literacy
  • Identifying and preventing AI bias
  • Protecting student data
  • Preventing AI-facilitated cheating

U.S. Federal Guidance

Department of Education AI Toolkit (October 2024)

The Biden Administration’s AI Toolkit for Safe, Ethical, and Equitable AI Integration was developed pursuant to Executive Order 14110 on AI. The guidance was informed by:

  • Public listening sessions with 90 educators (December 2023–March 2024)
  • 12 roundtable discussions with education leaders
  • Input from the Office for Civil Rights on discrimination concerns

Five Core Ethical Principles:

  1. Data Privacy & Security, Protect student information in AI systems
  2. Transparency & Accountability, Explain how AI decisions are made
  3. Bias Awareness & Mitigation, Test for and address discriminatory outcomes
  4. Human Oversight & Educator Judgment, AI assists but doesn’t replace teachers
  5. Academic Integrity, Clear policies on permitted AI use

Office for Civil Rights Guidance (November 2024)

The DOE’s Office for Civil Rights released “Avoiding the Discriminatory Use of Artificial Intelligence”, specifically addressing:

  • AI proctoring systems that disproportionately flag students of color
  • AI detection tools that produce false positives for non-native English speakers
  • Systems that fail to accommodate students with disabilities
  • Civil rights obligations when deploying AI in educational settings

Trump Administration Priorities (2025)

In July 2025, Secretary of Education Linda McMahon announced a proposed supplemental priority on Advancing AI in Education, responding to President Trump’s April 2025 Executive Order. The proposed priority encourages:

  • AI technologies to enhance classroom efficiency
  • Reducing administrative burdens through automation
  • Improving teacher training and evaluation with AI

Public comments were accepted through August 20, 2025.


State-Level AI Guidance

As of August 2025, 27 states have released AI guidance for K-12 public schools. Key patterns include:

Guidance Area                        | States Addressing
FERPA/COPPA/CIPA compliance baseline | ~20 states
Avoiding PII input to AI systems     | ~12 states
Data collection/retention practices  | ~16 states
Data security requirements           | ~21 states

Notable State Frameworks

Tennessee (Public Chapter 550, 2024):

  • First state to mandate AI policies for all K-12 districts and public charter schools
  • Tennessee School Boards Association developed a Model Policy in June 2024

Nevada (STELLAR Principles):

  • Security, Transparency, Empowerment, Learning, Leadership, Achievement, Responsible Use
  • Comprehensive framework for PreK-12 AI integration

Georgia (January 2025):

  • Framework focusing on ethical, effective, and secure AI use
  • Detailed implementation guidance for districts

University AI Policies: Academic Integrity

The Policy Spectrum

Universities have adopted varying approaches to AI use:

Prohibition Policies:

“All generative artificial intelligence tools are strictly prohibited in this class. Students turning in work violating this policy will be subject to all academic and disciplinary procedures associated with plagiarism and cheating.”

Permission-Based Policies:

“Students are allowed to use AI tools on assignments if instructor permission is obtained in advance. Unless given permission, each student is expected to complete each assignment without substantive assistance.”

Transparent Use Policies:

“Students are allowed to use AI tools on assignments if that use is properly documented and credited.”

Emerging Best Practices

Leading institutions are developing nuanced approaches:

Columbia University Generative AI Policy:

  • Researchers must avoid uploading unpublished data to AI tools
  • Clear guidance on when AI use is appropriate vs. prohibited
  • Citation requirements for AI-assisted work

Duke University:

  • Does not recommend AI detection software due to unreliability
  • No longer assigns numerical ratings to admissions essays (responding to AI use)
  • Focus on evaluating demonstrated skills rather than written artifacts

Citation Requirements

MLA Guidelines (2025 Revision):

  • Cite generative AI whenever you paraphrase, quote, or incorporate AI-generated content
  • AI tools should not be treated as authors
  • Use the template of core citation elements to accommodate AI sources
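
For instance, an entry built from MLA’s core citation elements might read as follows (an illustrative rendering of the pattern, not official MLA text):

  “Explain the causes of the French Revolution” prompt. ChatGPT, 15 Jan. version, OpenAI, 20 Jan. 2025, chat.openai.com.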

Student Data Privacy: FERPA and COPPA

FERPA Considerations

The Family Educational Rights and Privacy Act creates obligations when AI systems process student data:

Key Requirements:

  • Student education records must be protected when shared with AI vendors
  • De-identification requirements limit data usable in AI systems
  • Re-identification risk grows as more information is included in AI training

FERPA Risk
If you input personally identifiable information about students into AI tools like ChatGPT or Gemini, that data becomes part of the AI company’s dataset, creating a potential FERPA violation.
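
To make the “avoid PII input” guidance concrete, below is a minimal sketch of a redaction step a district might run before any student text leaves its systems. The regex patterns and placeholder labels are illustrative stand-ins for a vetted de-identification library, not a recommendation of any particular tool.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted
# de-identification library and district-approved tooling, not ad-hoc regexes.
PATTERNS = {
    "STUDENT_ID": re.compile(r"\b\d{7,9}\b"),            # e.g., district ID numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely personally identifiable information with placeholders
    before any student text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Give feedback on the essay by student 40312345 (jane.doe@school.example)."
print(redact(prompt))
# -> Give feedback on the essay by student [STUDENT_ID] ([EMAIL]).
```

Redaction complements, rather than replaces, written vendor agreements that satisfy FERPA’s school-official exception.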

COPPA Compliance

The Children’s Online Privacy Protection Act applies when AI tools collect data from children under 13:

ChatGPT Terms of Service: Prohibits users under 13; teens require parental consent

Compliance Requirements:

  • Parental consent before data collection
  • Strict limits on data use
  • Delete data upon parental request
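
As a sketch of how a platform might operationalize these requirements, the following hypothetical backend gates collection on verified parental consent for under-13 users and honors deletion requests. The in-memory stores, field names, and approximate age check are assumptions for illustration.

```python
from datetime import date

# Hypothetical in-memory stores standing in for a real database.
consents: dict[str, bool] = {}   # user_id -> verified parental consent on file
records: dict[str, list] = {}    # user_id -> collected interaction events

def collect(user_id: str, birthdate: date, event: dict) -> bool:
    """Store an interaction event only when COPPA permits collection."""
    age = (date.today() - birthdate).days // 365   # approximate age in years
    if age < 13 and not consents.get(user_id, False):
        return False   # under 13 with no verified consent: collect nothing
    records.setdefault(user_id, []).append(event)
    return True

def delete_on_parental_request(user_id: str) -> None:
    """Honor a parent's deletion request by removing everything held."""
    records.pop(user_id, None)
    consents.pop(user_id, None)
```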

COPPA 2.0 (Pending):

  • Would expand protections to students under 17
  • Would ban targeted advertising to children and teens
  • Currently under consideration in Congress

State Privacy Laws

The Future of Privacy Forum notes there are over 128 state student privacy laws that schools must navigate, creating a complex compliance landscape for AI adoption.


AI Tutoring: Standards for Responsible Development

Khan Academy’s Khanmigo Framework

Khanmigo, Khan Academy’s AI tutor, has been piloted in 260+ school districts and provides a model for responsible AI tutoring:

Design Principles:

  • Based on the Ethical Framework for AI in Education from the Institute for Ethical AI in Education
  • AI should advance well-defined educational objectives
  • Systems must benefit learners, not just institutions

Pedagogical Approach:

  • Socratic method: Guides students to answers rather than providing them directly
  • Step-level feedback: Immediate, specific guidance on learning progress
  • Standards-aligned: Connected to curriculum frameworks

Safety Guardrails:

  • Children under 18 require parental consent or school district partnership
  • Mechanisms prevent non-educational uses
  • Human supervision and support required

Technical Improvements:

  • Built-in calculator for numerical problems, avoiding AI math errors (see the sketch after this list)
  • Custom benchmark dataset for evaluating tutoring quality
  • Continuous model evaluation for math tutoring accuracy
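
The built-in calculator reflects a broader tool-routing pattern: detect when a student’s input is plain arithmetic and evaluate it deterministically instead of letting a language model approximate it token by token. A minimal sketch of that idea follows; the routing heuristic and the stubbed model call are assumptions, not Khan Academy’s implementation.

```python
import ast
import operator

# Deterministic evaluator for plain arithmetic, so numeric work never
# depends on a language model's token-by-token guesses.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv, ast.Pow: operator.pow}

def calculate(expr: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Route arithmetic to the calculator; everything else to the tutoring model.
    try:
        return f"{calculate(question)} (computed deterministically)"
    except (ValueError, SyntaxError):
        return "(handed to the tutoring model)"   # stand-in for a model call

print(answer("((3.5 + 1.5) ** 2) / 5"))       # -> 5.0 (computed deterministically)
print(answer("Why does dividing by 5 work?"))  # -> (handed to the tutoring model)
```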

Recognition: Common Sense Media rated Khanmigo 4 stars, above ChatGPT and Bard, for educational appropriateness.

Best Practices for AI Tutoring Systems

Based on leading implementations, responsible AI tutoring should:

  1. Pedagogical Foundation

    • Grounded in learning science research
    • Aligned to educational standards
    • Designed to develop understanding, not just provide answers
  2. Privacy by Design

    • Minimize data collection to what’s educationally necessary
    • Clear data retention and deletion policies
    • FERPA/COPPA compliance verification
  3. Bias Mitigation

    • Testing across demographic groups
    • Monitoring for differential outcomes
    • Accommodation for diverse learning styles
  4. Human Oversight (see the sketch after this list)

    • Teacher dashboard visibility into AI interactions
    • Escalation pathways for concerning content
    • Regular human review of AI outputs
  5. Transparency

    • Clear disclosure of AI capabilities and limitations
    • Explainable feedback (not just “correct/incorrect”)
    • Parent/guardian visibility into student AI use
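
As a minimal sketch of the human-oversight point (item 4 above), the following pipeline screens every tutor response and holds anything flagged for teacher review instead of delivering it. The keyword list stands in for a real moderation model, and the queue for a real teacher dashboard; both are assumptions.

```python
from dataclasses import dataclass, field

# Stand-ins: a keyword list in place of a real moderation model, and a
# simple queue in place of a teacher dashboard.
FLAG_TERMS = ("self-harm", "violence", "home address")

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def escalate(self, student_id: str, text: str, reason: str) -> None:
        self.items.append({"student": student_id, "text": text, "reason": reason})

def deliver(student_id: str, ai_response: str, queue: ReviewQueue):
    """Return the tutor's response only if it clears screening; otherwise
    hold it for human review and return nothing to the student."""
    for term in FLAG_TERMS:
        if term in ai_response.lower():
            queue.escalate(student_id, ai_response, f"matched '{term}'")
            return None   # withheld pending teacher review
    return ai_response
```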

AI in Assessment: Testing Bodies

Standardized Testing Evolution

PISA 2025: The Programme for International Student Assessment will include AI-powered chatbots that students can use to complete performance tasks, a major experiment in AI-assisted assessment.

College Board (SAT):

  • Transitioned to a fully digital testing format
  • Developing AI-enhanced item generation
  • Focus on skills difficult for AI to replicate

AI’s Greatest Potential in Assessment:

  • Generating test items more efficiently
  • Automated scoring with faster turnaround
  • Actionable, personalized feedback
  • Gauging creativity and problem-solving through natural language processing
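
One way to pair automated scoring with human oversight is confidence-based routing: accept the model’s rubric score only when its confidence clears a floor, and send everything else to a human rater. A minimal sketch under that assumption; the model_score stub and the 0.8 floor are hypothetical.

```python
from typing import NamedTuple, Optional

class Score(NamedTuple):
    points: int        # rubric points, e.g., 0-6
    confidence: float  # model's self-reported confidence, 0-1

def model_score(essay: str) -> Score:
    # Hypothetical stub standing in for an automated scoring model.
    return Score(points=4, confidence=0.62)

def grade(essay: str, confidence_floor: float = 0.8) -> tuple[Optional[int], str]:
    """Accept the automated score only when confidence is high; route
    everything else to a human rater so people stay in the loop."""
    s = model_score(essay)
    if s.confidence >= confidence_floor:
        return s.points, "auto-scored"
    return None, "routed to human rater"

print(grade("..."))   # -> (None, 'routed to human rater')
```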

AI Detection: Known Limitations

Detection Tool Concerns
  • Duke, OpenAI, and other institutions have stopped recommending AI detection software due to unreliability
  • Research shows bias against non-native English speakers
  • OpenAI withdrew its own detection tool due to inaccuracy
  • False positives can devastate students’ academic careers

The Rignol v. Yale lawsuit illustrates the risks: GPTZero flagged 30-year-old academic papers as “100% AI-generated,” results that were obviously false yet nevertheless informed disciplinary decisions.


The Emerging Standard of Care

For Educational Institutions

Based on federal guidance, international frameworks, and litigation trends, institutions should:

Before Adoption:

  1. Evaluate AI tools against educational objectives
  2. Verify FERPA/COPPA/state privacy compliance
  3. Test for bias across demographic groups (see the sketch below)
  4. Ensure disability accommodation capabilities
  5. Establish clear, written policies before enforcement

During Use:

  6. Maintain human oversight of AI outputs
  7. Monitor outcomes for differential impact (see the sketch below)
  8. Provide transparency to students and parents
  9. Document AI use and decision-making processes

For Academic Integrity:

  10. Create clear, advance policies on permitted AI use
  11. Avoid reliance on unreliable detection tools
  12. Treat AI flags as starting points for investigation, not conclusions
  13. Provide due process before sanctions
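
For the bias-testing and differential-impact items (3 and 7 above), a minimal sketch of one common check: comparing an AI detector’s false positive rate across student groups and flagging disproportionate gaps. The demo data is synthetic, and the 80% ratio threshold borrows the four-fifths rule from employment-discrimination analysis; treating that as the right cutoff for education tools is an assumption, not settled law.

```python
def false_positive_rate(flags, truths):
    """Share of genuinely human-written submissions the tool flagged as AI."""
    honest = [flag for flag, is_ai in zip(flags, truths) if not is_ai]
    return sum(honest) / len(honest) if honest else 0.0

def parity_check(results_by_group, ratio_threshold=0.8):
    """Flag groups whose false positive rate is disproportionately high.

    results_by_group maps a group label to (flags, truths): flags[i] is True
    if the tool flagged submission i; truths[i] is True if it really was
    AI-generated.
    """
    rates = {g: false_positive_rate(f, t) for g, (f, t) in results_by_group.items()}
    best = min(rates.values())
    return {g: rate for g, rate in rates.items()
            if rate > 0 and best / rate < ratio_threshold}

# Synthetic demo data (not real results): the hypothetical tool wrongly
# flags non-native speakers four times as often.
demo = {
    "native speakers":     ([False] * 18 + [True] * 2, [False] * 20),
    "non-native speakers": ([False] * 12 + [True] * 8, [False] * 20),
}
print(parity_check(demo))   # -> {'non-native speakers': 0.4}
```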

For Edtech Vendors

Vendors developing AI educational tools should:

  1. Design for Learning: AI should teach, not just answer
  2. Minimize Data: Collect only what’s educationally necessary
  3. Test for Bias: Verify equitable outcomes across demographics
  4. Enable Oversight: Provide teacher/parent visibility
  5. Ensure Compliance: Meet FERPA/COPPA/state requirements
  6. Be Transparent: Disclose limitations and error rates

Frequently Asked Questions

Can schools prohibit students from using ChatGPT?

Yes, schools can establish policies prohibiting or limiting AI use. However, the Harris v. Hingham case suggests schools cannot retroactively punish AI use when no clear policy existed. Best practice is to establish explicit, written policies before enforcement, communicate them to students, and update them as technology evolves.

Are AI detection tools reliable for identifying student cheating?

No. Leading institutions including Duke University and OpenAI itself have stopped recommending AI detection tools due to high false positive rates. Research shows these tools are particularly unreliable for non-native English speakers and neurodivergent students. Detection flags should trigger investigation, not automatic discipline.

What federal laws govern AI in K-12 education?

The primary federal laws are FERPA (student record privacy), COPPA (online privacy for children under 13), CIPA (internet filtering), and civil rights laws enforced by the Department of Education’s Office for Civil Rights. State laws add additional requirements; the Future of Privacy Forum counts over 128 state student privacy laws.

How should teachers cite AI-generated content in academic work?

MLA’s 2025 guidance recommends citing generative AI whenever you paraphrase, quote, or incorporate AI-generated content. AI tools should not be treated as authors. Instead, use citation templates that identify the tool used, the prompt given, and the date of generation. Check your institution’s specific requirements.

Is there a legal duty to use AI tutoring tools?

Not yet established, but emerging guidance suggests technology competence is part of educational duty of care. The NYSBA Task Force provocatively suggested that “a refusal to use technology that makes legal work more accurate and efficient may be considered a refusal to provide competent legal representation.” Similar logic may eventually apply to educators who refuse beneficial AI tools.

Questions About AI Standards in Education?

As AI transforms teaching, learning, and assessment, understanding emerging standards of care is essential for institutions, vendors, and educators. From privacy compliance to bias prevention to academic integrity, the rules are rapidly evolving.

Consult an Education Technology Attorney
