Aviation AI Safety & Air Traffic Control Liability

Aviation AI: Where “Near Perfect Performance” Meets Unprecedented Risk

Aviation demands what a 50-year industry veteran called “near perfect performance.” Because failure can mean hundreds of lives lost in seconds, aviation AI liability is fundamentally different from liability in any other industry. As AI systems increasingly control aircraft, manage air traffic, and make split-second decisions that “humans may not fully understand or control,” the legal frameworks developed for human-piloted aviation are straining under the weight of technological change.

The stakes are clear: the Boeing 737 MAX MCAS failures killed 346 people. The FAA’s chronic controller shortage affects 45,000+ daily flights. And the rush to deploy AI in safety-critical aviation systems is accelerating faster than regulation can follow.

The Air Traffic Controller Crisis: Setting the Stage for AI

Staffing Below Critical Levels

The Federal Aviation Administration faces what the National Air Traffic Controllers Association (NATCA) calls a historic staffing crisis. According to NATCA testimony and FAA data:

  • The FAA is approximately 3,800 controllers short of the 14,633 needed to adequately staff facilities
  • Only 10,730 Certified Professional Controllers are currently on board
  • Only 23 of 289 terminal facilities meet or exceed staffing standards
  • The controller workforce declined by 13% between 2010 and 2024, a loss of nearly 2,000 employees

The Human Cost

The shortage creates dangerous working conditions:

  • 41%+ of controllers work 10-hour days, six days a week
  • Mandatory overtime at 40% of facilities requires 6-day workweeks at least monthly
  • Controller morale is at “historic lows” according to NATCA
  • Training takes 18 months to 4 years depending on facility complexity

The 2025 Shutdown Impact

During the November 2025 federal shutdown, the situation deteriorated further:

  • Controller retirements roughly quadrupled, from about 4 per day to 15-20 per day
  • The FAA has 400 fewer controllers than during the 2019 shutdown
  • Essential employees worked without pay while maintaining safety operations

This crisis creates powerful incentives to deploy AI assistance in air traffic management, and it raises equally powerful liability questions when AI makes errors that human controllers might have caught.

The Boeing 737 MAX Precedent: Automation That Killed

The MCAS Failures

The Boeing 737 MAX crashes of 2018-2019 remain the defining precedent for aviation AI liability. The Maneuvering Characteristics Augmentation System (MCAS), an automated flight control system, caused both disasters:

  • Lion Air Flight 610 (October 29, 2018): 189 killed
  • Ethiopian Airlines Flight 302 (March 10, 2019): 157 killed

The MCAS system relied on data from a single angle-of-attack sensor. When erroneous readings occurred, the system repeatedly forced the aircraft into dives that pilots could not overcome. Critically, Boeing did not inform pilots that MCAS existed; none of the aircraft documentation explained the system.

“Automation Surprise”

The crashes exemplified a known phenomenon: “automation surprise,” where automated systems take control in ways pilots cannot anticipate or counteract. Experts had warned of these dangers for over a decade, yet Boeing chose not to ensure MAX pilots understood that flight controls could “literally be taken out of their hands by an automated system.”

The Legal Reckoning (2024-2025)

Criminal Settlement (May 2025): The U.S. Department of Justice announced a final settlement with Boeing to avoid criminal prosecution. Boeing agreed to pay over $1.1 billion, including $445 million specifically for crash victims’ families. The Justice Department dismissed fraud charges in exchange for financial commitments and compliance measures. Boeing had previously entered a deferred prosecution agreement in 2021 that was reopened after a January 2024 Alaska Airlines MAX 9 door plug incident.

Civil Settlements (November 2024): Boeing reached last-minute settlements with victims’ families to avert a federal civil trial scheduled for November 2024 in Chicago. According to Boeing, more than 90% of civil complaints have been resolved, though some families continue pursuing public trials seeking greater accountability.

Shareholder Settlement: Boeing reached a $225 million settlement with shareholders over negligence claims related to board oversight of the MAX program.

Prior Penalties:

  • $2.5 billion total under the original 2021 deferred prosecution agreement
  • $243.6 million criminal penalty
  • $1.77 billion to airline customers
  • $500 million crash victim fund
  • $200 million SEC settlement for misleading investors

Boeing’s court filing stated: “The defendant, Boeing, has admitted that it produced an airplane that had an unsafe condition that was a proximate cause of Plaintiff’s compensatory damages.”

The Liability Framework

The MAX cases established critical precedents:

  1. Manufacturers bear responsibility for automation that overrides pilot control
  2. Failure to disclose automated system capabilities creates liability
  3. “Automation surprise” is a foreseeable and preventable harm
  4. Single-sensor reliance in safety-critical systems is a design defect (a minimal redundancy sketch follows this list)
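
The fourth precedent has a direct engineering counterpart. Below is a minimal sketch of the redundancy MCAS lacked: mid-value (median) voting across independent angle-of-attack channels, so that no single faulty sensor can drive the control law. Everything in it, including the disagreement threshold, is illustrative, not a certified implementation.

```python
from statistics import median

# Illustrative only: mid-value selection across redundant sensor channels.
# A single faulty channel (like the lone AoA sensor MCAS relied on) cannot
# drive the output, because the median rejects one outlier.

DISAGREE_LIMIT_DEG = 5.0  # hypothetical cross-channel disagreement threshold

def voted_angle_of_attack(channels: list[float]) -> tuple[float, bool]:
    """Return (selected value, healthy flag) from redundant AoA readings."""
    if len(channels) < 3:
        raise ValueError("mid-value voting needs at least 3 channels")
    selected = median(channels)
    # Flag the data as degraded if any channel strays far from the vote, so
    # downstream automation can disengage instead of acting on bad data.
    healthy = all(abs(c - selected) <= DISAGREE_LIMIT_DEG for c in channels)
    return selected, healthy

# One stuck-high channel is outvoted and flagged:
print(voted_angle_of_attack([4.1, 3.9, 24.7]))  # (4.1, False)
```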

FAA Roadmap for AI Safety Assurance (2024)

The Foundational Document

In August 2024, the FAA released its Roadmap for Artificial Intelligence Safety Assurance, the first comprehensive U.S. framework for AI in aviation. The 31-page document, developed through two years of industry stakeholder meetings, establishes guiding principles for AI safety assurance in aircraft and aircraft operations.

Seven Guiding Principles

The FAA established core principles for AI safety assurance:

  1. Work Within the Aviation Ecosystem: AI must integrate into established safety frameworks and processes
  2. Take an Incremental Approach: Gradual AI integration allows safety methods to evolve with real-world experience
  3. Leverage Industry Consensus Standards: Global harmonization requires industry-wide standard adoption
  4. Ensure Workforce Readiness: FAA hired a Chief Scientific and Technical Advisor for AI/ML and certification specialists
  5. Assure AI Safety: Develop specific methods for AI system certification
  6. Leverage AI for Safety: Use AI to enhance, not replace, human safety capabilities
  7. Support Aviation Safety Research: Continuous research into AI-specific risks

Learned vs. Learning AI

The FAA makes a critical distinction:

  • Learned AI: Static systems that operate within fixed parameters
  • Learning AI: Systems that continuously adapt and incorporate new reasoning

The roadmap warns: “Learning AI implementations may adapt in a manner that degrades performance, ultimately weakening their original safety profile.” This creates particular liability exposure when learning systems degrade after deployment.
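
The engineering counterpart to this liability point is continuous performance monitoring: an operator of a learning system is poorly placed to argue it could not have noticed degradation. A minimal monitoring sketch follows, assuming a stream of scored outcomes is available; the baseline, window, and tolerance values are illustrative assumptions, not FAA figures.

```python
from collections import deque

class PerformanceDriftMonitor:
    """Illustrative post-deployment monitor for a learning AI system.

    Compares rolling-window accuracy against the accuracy recorded at
    certification time and flags degradation beyond a tolerance. All
    numbers here are hypothetical, not regulatory values.
    """

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.02):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data to judge yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = PerformanceDriftMonitor(baseline_accuracy=0.97)
# Feed it live outcomes; a True from degraded() should trigger human review
# and reassessment of the system's original safety case.
```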

A Living Document

The FAA acknowledges the roadmap is “a point-in-time snapshot of a fast-paced and evolving technology.” Updates will be “implemented based on experience, standards development, and research.” This creates uncertainty for operators seeking regulatory safe harbors.

UK Project Bluebird: The First AI Air Traffic Control

The World’s First Trial

Project Bluebird, a £13.7 million EPSRC Prosperity Partnership between NATS (UK air navigation services), the University of Exeter, and The Alan Turing Institute, aims to deliver the world’s first AI system to control airspace in live shadow trials.

Technical Approach

Over 40 experts from NATS, The Alan Turing Institute, and the University of Exeter are developing:

  • A probabilistic Digital Twin of UK airspace
  • AI agents trained on real-world air traffic scenarios
  • High-fidelity trajectory prediction methods accounting for weather and aircraft performance uncertainties

Current Status

The project is three years into a five-year timeline:

  • The Digital Twin can replay historical scenarios or generate artificial ones
  • AI agents controlled historic traffic over the London Middle sector (20,000-30,000 feet)
  • 2026 shadow mode trials will test AI agents on real-time traffic data without actual authority

Critical Safety Principle

The AI agents will be tested “without any authority to make real-world decisions”, allowing direct comparison with human controller decision-making. This shadow-mode approach reflects the aviation industry’s extreme caution about AI autonomy.
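
In engineering terms, shadow mode means the AI’s instructions are computed and logged but never issued. The sketch below shows the shape of such a harness; every interface in it is hypothetical, since Project Bluebird’s internal architecture is not public at this level of detail.

```python
from dataclasses import dataclass, field

@dataclass
class ShadowTrialHarness:
    """Hypothetical shadow-mode harness: an AI agent sees the same traffic
    picture as the human controller, but only the human's instruction is
    ever issued. Disagreements are logged for offline comparison."""
    disagreements: list = field(default_factory=list)

    def step(self, traffic_picture, ai_agent, human_instruction):
        ai_instruction = ai_agent(traffic_picture)  # advisory only, never actuated
        if ai_instruction != human_instruction:
            self.disagreements.append({
                "picture": traffic_picture,
                "ai": ai_instruction,
                "human": human_instruction,
            })
        return human_instruction  # the AI has no real-world authority

# Toy usage: an "agent" that always requests a climb, versus a human hold.
harness = ShadowTrialHarness()
harness.step({"flight": "BA123"}, lambda p: "CLIMB FL350", "HOLD")
print(len(harness.disagreements))  # 1 logged disagreement
```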

The Trust Challenge

A key objective is providing “insight into and explanation of the decision making of AI agents.” Trust between AI agents and Air Traffic Control Officers (ATCOs), and with the traveling public, requires transparency that current AI systems struggle to provide.

EASA NPA 2025-07: Europe’s AI Trustworthiness Framework

First Regulatory Proposal

On November 10, 2025, the European Union Aviation Safety Agency (EASA) published NPA 2025-07, the first regulatory proposal for AI trustworthiness in aviation. Comments close February 10, 2026.

EU AI Act Alignment

The proposal implements EU AI Act (Regulation 2024/1689) requirements for high-risk AI systems in aviation. It establishes:

  • Technical specifications (DS.AI) for AI trustworthiness
  • Acceptable means of compliance for demonstrating safety
  • Coverage of data-driven AI systems including supervised and unsupervised machine learning

Scope and Levels

The framework addresses:

  • Level 1 AI: AI-based assistance (decision support)
  • Level 2 AI: Human-AI teaming (shared control)

Future extensions will cover reinforcement learning, knowledge-based systems, hybrid approaches, and generative models.

Regulatory Trajectory

This is the first step of Rulemaking Task (RMT) 0742. A second NPA in 2026 will deploy the generic framework to specific aviation domains including:

  • Aircraft certification
  • Air traffic management
  • Drone operations

Autonomous Aircraft: The Liability Frontier

The Single Pilot Debate

Airbus and other manufacturers are pushing for single-pilot operations with AI co-pilot systems. The economic incentives are compelling: the International Air Transport Association (IATA) projects that single-pilot or autonomous operations could save billions annually through reduced crew costs and AI-optimized fuel efficiency.

EASA Pause (June 2025): In a significant development, EASA announced it has paused research into single-pilot flight operations, finding that current cockpit technology cannot match the safety standards of two-pilot operations. This decision halts regulatory development for:

  • eMCO (Extended Minimum Crew Operations): Single pilot during cruise with AI assistance
  • SiPO (Single Pilot Operations): Full single-pilot operation

Regulatory consideration of single-pilot operations by EASA and ICAO is now scheduled for 2027-2030.

Current Reality:

  • Cargo drone operations remain the nearest-term application
  • Single-pilot airline operations unlikely before 2030 at earliest
  • Fully autonomous passenger flight remains decades away

Pilot Association Opposition

Major pilot associations strongly oppose reduced crew operations:

  • ALPA White Paper (September 2024): The Air Line Pilots Association published analysis arguing that investing in reduced-crew operations would displace other safety investments and that two pilots with “uniquely human ability to adapt to unexpected circumstances remain irreplaceable”
  • BALPA (British) aligns with IFALPA and ECA positions in opposition
  • IFALPA (International) opposes single-pilot commercial operations
  • MIT Research (2024): MIT’s Aeronautics Department highlighted AI limitations: AI “falters in the face of the unpredictable, like an engine failure during a storm or an emergency landing in a restricted zone”

Public Trust Gap

Survey data reveals significant public resistance:

  • 73% of US adults would never feel comfortable flying without two pilots
  • 80% say remotely operated planes would make them feel less safe
  • 83% of Australians would be more hesitant to book with only one pilot
  • 76% of adults in 15 surveyed countries expressed discomfort with pilotless planes

The Liability Question

As legal scholars note, current international frameworks (including the Montreal Convention 1999) were developed for human-operated aviation. They do not adequately resolve:

  • Assignment of liability in machine-led decision-making
  • Compensation mechanisms for passengers and third parties
  • Coordination of responsibility between operators, manufacturers, and AI systems
  • Legal definitions of “carrier,” “accident,” and “operational control” when no pilot is onboard

Advanced Air Mobility: eVTOL Certification (2025)

The New Aircraft Category

The FAA issued final certification guidance for powered-lift aircraft (including eVTOL “air taxis”) in July 2025, the first comprehensive framework for a new civil aircraft category since helicopters were introduced 80 years ago.

Key Requirements:

  • Maximum certified takeoff weight: 12,500 pounds
  • Passenger capacity: Six or fewer occupants
  • Battery-electric engine systems required

The October 2024 final rule outlined pilot and instructor certification requirements as well as operational rules for powered-lift aircraft.

Global Regulatory Coordination

The Roadmap for Advanced Air Mobility Aircraft Type Certification represents collaboration among the FAA, UK CAA, Transport Canada, Australian CASA, and New Zealand CAA.

Incremental Approach: The roadmap explicitly adopts a “crawl, walk, run approach”, building first on piloted AAM, then remotely piloted systems, with increasing autonomy levels over time.

Industry Progress

  • Joby Aviation: Received Part 141 certificate for flight academy and FAA acceptance for Safety Management System
  • Eve Air Mobility: Achieved significant certification milestone with Brazil’s ANAC publishing final airworthiness criteria
  • Archer Aviation: Received FAA certification for pilot training academy

While some aircraft may be certified as early as 2025, the FAA envisions integrated AAM operations at key locations by 2028.

AI Liability in AAM

Advanced air mobility aircraft will rely heavily on AI for:

  • Flight control and stability (inherently unstable configurations)
  • Collision avoidance in urban airspace
  • Autonomous or semi-autonomous flight modes
  • Integration with urban air traffic management

The liability framework remains undeveloped. When an AI-controlled air taxi causes injury in an urban environment, multi-party liability analysis becomes extraordinarily complex, implicating manufacturers, AI developers, operators, infrastructure providers, and potentially air traffic management systems.

BVLOS Drone Operations: August 2025 Proposed Rule

Part 108 Framework

On August 5, 2025, following Executive Order 14307 (“Unleashing American Drone Dominance”), the FAA proposed rules enabling routine Beyond Visual Line of Sight (BVLOS) operations:

Key Provisions:

  • BVLOS operations up to 400 feet above ground level
  • Unmanned aircraft weighing up to 1,320 pounds including payload
  • Commercial operations including package delivery, agriculture, surveying
  • Final rule mandated by February 2026

Autonomous Operations Framework

The FAA envisions Part 108 as “primarily autonomous flights”:

  • A supervisor (not traditional pilot) oversees operations
  • The supervisor’s job is “to make sure the automation is doing its job”
  • If automation fails, supervisors have “very little control over the aircraft”
  • Heavy reliance on autonomous detect-and-avoid technology

Certification Approach

Rather than traditional aircraft certification (which has produced only 3 certified drone models), manufacturers would adhere to industry consensus standards. This addresses complaints that traditional certification is “overly time-consuming” for drone models that “may be obsolete before the type certification process is complete.”

Automated Data Service Providers

BVLOS operators must use FAA-approved Automated Data Service Providers (ADSPs) for Unmanned Traffic Management (UTM) systems. Under the proposal, the TSA adds security threat assessment and cybersecurity mandates.

Criminal Drone Enforcement (2024-2025)

Recent prosecutions demonstrate the stakes of drone airspace violations:

Firefighting Interference (January 2025): A drone operated near the Palisades wildfire collided with a firefighting aircraft, leaving a “football-sized” hole in the wing and grounding the aircraft for several days during active firefighting operations.

Wildfire Response Interference (September 2024): Sean Kusterer was sentenced to one month incarceration for operating a drone that recklessly interfered with law enforcement and emergency response efforts related to wildfire suppression.

NFL Game Airspace Violation (March 2024): Matthew Hebert received 12 months probation and a $500 fine for unlawfully operating a drone during the AFC Championship Game, temporarily suspending play.

MLB National Defense Airspace (May 2024): Criminal complaint charged Jason Carvell Banner for operating a drone in restricted airspace surrounding a Major League Baseball game at Globe Life Field.

The Regulatory “Minefield”

The FAA grants exemptions and waivers on a case-by-case basis with standards varying drastically between companies. One legal analysis characterized the situation as creating “an unpredictable regulatory minefield.” As drones become more autonomous, the traditional negligence framework, which assumes human control, becomes increasingly inadequate.

Aviation AI Cybersecurity: A 131% Surge in Attacks

The Rising Threat

Cyberattacks against aviation infrastructure surged 131% between 2022 and 2023, and a separate analysis found attacks have increased 600% over recent years.

Notable 2024 Incidents:

  • Seattle-Tacoma International Airport (September 2024): Rhysida ransomware group disrupted critical systems, demanding $6 million in Bitcoin; terminal message boards offline for over a week
  • Japan Airlines (December 2024): Attack disrupted luggage services and delayed flights during New Year holiday season
  • Between January 2024 and April 2025, 27 attacks were identified from 22 different ransomware groups

Investment in the global aviation cybersecurity market is projected to grow from $4.6 billion (2023) to $8.42 billion by 2033.

Critical Vulnerability Categories

AI Decision Systems:

  • Flight Management Systems (FMS) could be manipulated to reroute aircraft
  • Centralized AI air traffic control breaches could affect thousands of aircraft
  • A breach in AI systems could “disrupt regional or even global air travel”

Sensor Manipulation:

  • GPS spoofing has been used in military operations to redirect UAVs into enemy zones
  • Adversarial image perturbations can deceive UAV surveillance systems
  • SATCOM systems are vulnerable to spoofing and jamming

Attack Vectors:

  • Unauthorized access to flight systems
  • GPS spoofing and jamming (a minimal cross-check sketch follows this list)
  • Adversarial AI manipulation
  • UAV hijacking
  • Data poisoning of training systems
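
A standard defense against position spoofing is cross-checking GPS against an independent source such as inertial dead reckoning. The sketch below shows the idea; the threshold and the simplified 2-D interface are illustrative assumptions, not any certified avionics design.

```python
import math

# Illustrative cross-check: flag a GPS fix that diverges from an inertially
# propagated position by more than a plausibility threshold. The limit and
# the 2-D position interface are hypothetical simplifications.

DIVERGENCE_LIMIT_M = 500.0

def gps_plausible(gps_xy: tuple[float, float],
                  inertial_xy: tuple[float, float]) -> bool:
    """Return False when GPS disagrees with dead reckoning enough to suggest
    spoofing or a major fault; honest errors drift apart slowly, while
    spoofed fixes tend to jump."""
    dx = gps_xy[0] - inertial_xy[0]
    dy = gps_xy[1] - inertial_xy[1]
    return math.hypot(dx, dy) <= DIVERGENCE_LIMIT_M

# A sudden multi-kilometre jump relative to the inertial solution is
# rejected rather than fed to navigation or AI decision systems:
print(gps_plausible((0.0, 0.0), (2500.0, 0.0)))  # False
```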

Demonstrated Vulnerabilities

Real-world demonstrations have proven the risks:

  • 2015: Researchers remotely hacked a Jeep Cherokee, gaining control over critical functions
  • 2016: University of South Carolina researchers deceived a Tesla Model S autopilot by projecting fake lane markings

These vehicle-based demonstrations translate directly to aviation systems using similar sensor and AI architectures.

FAA Research Under Threat

Despite growing risks, the FAA’s Aviation Safety Group is moving to cut AI-powered cybersecurity research programs exploring whether AI and machine learning can detect cyber intrusions in real-time, even though funding was allocated for these efforts.

International Legal Framework Gaps

Montreal Convention Limitations

The Montreal Convention 1999 governs air carrier liability but was designed for human-operated aviation. Legal analysis identifies critical gaps:

  • The Convention does not define “aircraft”, though unmanned aircraft are generally considered a subcategory
  • Terms like “carrier,” “accident,” and “operational control” don’t map to autonomous scenarios
  • Responsibility shared across software developers, manufacturers, and operators lacks clear allocation

ICAO’s Limited Framework

ICAO distinguishes between autonomous aircraft and remotely-piloted aircraft (RPA), and anticipates that “only RPA will be able to integrate into the international civil aviation system in the foreseeable future.” This regulatory pessimism reflects the profound legal challenges of fully autonomous flight.

A working paper proposes that ICAO initiate focused legal work toward a harmonized international instrument, whether a new convention or supplementary protocol, to define liability and compensation mechanisms for autonomous aviation.

Third-Party Liability Void

The Rome Convention on third-party liability lacks sufficient ratifications across Europe, meaning no harmonized rules exist for damage caused by autonomous aircraft to people or property on the ground.

Predictive Maintenance AI: Emerging Liability

The Promise

AI-powered predictive maintenance represents one of aviation’s most promising AI applications. Delta TechOps’ APEX (Advanced Predictive Engine) program won the 2024 Grand Laureate Award from Aviation Week Network for advancing airline MRO capabilities through real-time engine data analysis.

The Risk

Research published in Discover Artificial Intelligence identifies critical concerns:

False Negatives:

  • AI may fail to detect critical issues, leading to in-flight mechanical failures
  • Safety consequences of missed detections can be catastrophic (the threshold trade-off is sketched after this list)
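
The false-negative problem is ultimately a threshold choice. The sketch below, with hypothetical risk scores, shows why tuning a maintenance model to reduce false alarms necessarily increases the risk of missing a real failure.

```python
# Illustrative only: a maintenance-alert threshold trades false negatives
# (missed impending failures, the safety-critical error) against false
# alarms (unnecessary inspections). All scores and thresholds are hypothetical.

def alert_stats(scored_history, threshold):
    """scored_history: (model risk score, component actually failing) pairs."""
    missed = sum(1 for score, failing in scored_history
                 if failing and score < threshold)        # false negatives
    false_alarms = sum(1 for score, failing in scored_history
                       if not failing and score >= threshold)
    return missed, false_alarms

history = [(0.92, True), (0.55, True), (0.40, False), (0.70, False)]
print(alert_stats(history, threshold=0.8))  # (1, 0): one real failure missed
print(alert_stats(history, threshold=0.5))  # (0, 1): none missed, one false alarm
```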

Model Drift:

  • AI performance degrades over time without proper maintenance
  • Inaccurate predictions cause both safety issues and operational inefficiencies

Data Quality:

  • Gartner predicts 60% of AI projects will be abandoned by 2026 due to inaccurate data
  • McKinsey reports 70% of AI projects fail due to data quality issues
  • IDC concludes 85% of AI projects fail due to messy, incomplete, or bad data

Regulatory Response

In January 2025, the Department of Transportation fined JetBlue $2 million for chronic delays and “unrealistic scheduling”, the first time a U.S. airline was penalized specifically for operational delays. As AI increasingly drives scheduling and maintenance decisions, similar liability exposure will follow.

The Emerging Standard of Care

For Airlines and Operators

AI System Selection:

  • Evaluate AI systems against FAA Roadmap principles
  • Require vendors to document learned vs. learning AI characteristics
  • Assess whether AI creates new failure modes not addressed by current training

Human Oversight:

  • Maintain human authority over safety-critical decisions
  • Train crew on AI system limitations and failure modes
  • Establish clear protocols for AI system disagreements

Disclosure:

  • Inform passengers when AI systems are engaged in flight operations
  • Document AI involvement in incidents for regulatory reporting
  • Preserve AI decision logs for potential litigation (a minimal logging sketch follows this list)
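
Preservation is easiest when the system writes a discovery-ready record at decision time. Below is a minimal sketch of an append-only decision log; the field names and the example system ID are illustrative, not drawn from any real avionics logger.

```python
import hashlib, json, time

def log_ai_decision(log_file, system_id: str, inputs: dict, output: dict):
    """Append one AI decision record suitable for later discovery.

    Hashing the inputs lets a party later show that the logged record
    matches the data the system actually saw. Fields are illustrative."""
    record = {
        "timestamp_utc": time.time(),
        "system_id": system_id,
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    log_file.write(json.dumps(record) + "\n")  # append-only JSON Lines

with open("ai_decisions.jsonl", "a") as f:
    log_ai_decision(f, "fms-advisory-v2",
                    {"sensor": "aoa", "value_deg": 4.1},
                    {"advisory": "no_action"})
```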

Cybersecurity:

  • Implement aviation-specific AI security frameworks
  • Monitor for adversarial attacks on AI decision systems
  • Maintain incident response plans for AI system compromise

For AI System Developers

Design Standards:

  • Follow FAA and EASA guidance on AI trustworthiness
  • Implement redundancy for safety-critical AI functions
  • Avoid single-sensor reliance (the MCAS lesson)

Transparency:

  • Document AI decision-making processes for certification
  • Provide explainability for safety-critical recommendations
  • Disclose known limitations and failure modes

Testing:

  • Conduct adversarial testing before deployment
  • Test for sensor spoofing and data poisoning vulnerabilities
  • Validate performance under degraded conditions (a minimal test sketch follows this list)
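
One concrete way to exercise that last point is to inject implausible or spoofed sensor values and assert the system fails safe rather than acting on them. In the sketch below, `system_under_test` is a hypothetical stand-in with toy advisory logic, not a real avionics interface.

```python
# Illustrative pre-deployment check: inject spoofed / degraded sensor
# values and assert the system under test refuses to act on them.

def system_under_test(aoa_deg: float) -> str:
    # Toy advisory logic with a plausibility guard.
    if not (-10.0 <= aoa_deg <= 30.0):
        return "DISENGAGE"          # fail safe on implausible input
    return "TRIM_DOWN" if aoa_deg > 12.0 else "NO_ACTION"

def test_spoofed_sensor_fails_safe():
    for spoofed in (-999.0, 91.0, float("inf")):
        assert system_under_test(spoofed) == "DISENGAGE"

def test_nominal_input_still_works():
    assert system_under_test(4.0) == "NO_ACTION"

test_spoofed_sensor_fails_safe()
test_nominal_input_still_works()
print("degraded-condition checks passed")
```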

Updates:

  • Establish clear protocols for AI system updates
  • Monitor learning AI systems for performance degradation
  • Document model drift and mitigation strategies

For Regulators

Certification:

  • Develop AI-specific certification pathways
  • Require explainability for safety-critical AI decisions
  • Establish ongoing monitoring requirements for learning AI

Enforcement:

  • Monitor AI system performance through mandatory reporting
  • Investigate AI-involved incidents with technical expertise
  • Develop penalties appropriate to AI-specific violations

International Coordination:

  • Harmonize AI aviation standards across jurisdictions
  • Address Montreal Convention gaps for autonomous operations
  • Develop third-party liability frameworks for AI-caused harm

For Attorneys

Plaintiffs’ Counsel:

  • Preserve all AI system logs and training data immediately
  • Engage technical experts in AI system analysis
  • Investigate whether AI limitations were disclosed to crew
  • Consider manufacturer, operator, and AI vendor liability

Defense Counsel:

  • Document human oversight and intervention opportunities
  • Establish regulatory compliance with FAA/EASA frameworks
  • Preserve evidence of adequate AI system warnings and training
  • Address foreseeability of specific AI failure modes

Looking Forward

Key Unresolved Questions

  1. When AI makes a split-second decision that causes a crash, who is liable? The manufacturer who designed it? The airline that deployed it? The software developer who trained it? The data provider whose information it relied on?

  2. Does the FAA Roadmap create a regulatory safe harbor? Can operators who follow its principles defend against negligence claims?

  3. How do courts handle AI decisions “humans may not fully understand or control”? Does the black-box nature of AI create strict liability?

  4. What disclosure obligations exist for AI limitations not discovered until after deployment? Does the learning AI distinction create ongoing duties?

  5. Can manufacturers disclaim AI liability through terms of service? Are such limitations enforceable in aviation contexts?

The Shadow of 346 Deaths

The Boeing 737 MAX disasters demonstrated what happens when automation fails in aviation. As AI systems become more capable and more autonomous, the potential for similar catastrophes, and the liability that follows, will only grow. Aviation’s demand for “near perfect performance” creates both the imperative for AI assistance and the framework for holding AI systems accountable when they fall short.
