
Military AI & Autonomous Weapons Standard of Care


Military AI: The Ultimate Accountability Challenge

Lethal autonomous weapons systems (LAWS), weapons that can select and engage targets without human intervention, represent the most consequential liability frontier in artificial intelligence. Unlike AI errors in hiring or healthcare that cause individual harm, autonomous weapons failures can kill civilians, trigger international incidents, and constitute war crimes. The legal frameworks governing who bears responsibility when AI-enabled weapons cause unlawful harm remain dangerously underdeveloped.

As AI-powered military systems proliferate globally, the gap between technological capability and legal accountability widens. The international community is racing to establish binding rules before autonomous weapons reshape both the conduct of warfare and the nature of responsibility for it.

International Regulatory Framework

UN General Assembly Resolution 79/62 (December 2024)

On December 2, 2024, the United Nations General Assembly adopted a historic resolution on lethal autonomous weapons systems with overwhelming support: 166 votes in favor, 3 opposed (Belarus, North Korea, and Russia), and 15 abstentions.

Key Provisions:

  • Creates a new UN forum to address autonomous weapons challenges
  • Proposes a two-tiered approach: prohibit some LAWS while regulating others
  • Mandates “open informal consultations” in New York during 2025
  • Opens participation to member states, NGOs, the scientific community, and industry

Significance: This was the second-ever UN General Assembly resolution on “killer robots,” following the first in December 2023 (152-4-11). The growing vote margins reflect accelerating international concern about autonomous weapons.

Limitations: The resolution does not mandate treaty negotiations because the United States and a small number of states vigorously opposed binding commitments. The resolution is the “beginning of a process” rather than a definitive framework.

CCW Group of Governmental Experts Rolling Text

The Convention on Certain Conventional Weapons (CCW) has convened a Group of Governmental Experts (GGE) working toward a potential international instrument on LAWS.

The Rolling Text (May 2025) includes provisional consensus on:

  1. Characterization of LAWS: “An integrated combination of one or more weapons and technological components, that can select and engage a target, without intervention by a human user in the execution of these tasks”

  2. IHL Applicability: International humanitarian law applies to all autonomous systems

  3. Human Control Requirement: Human judgment and control are essential for lawful use

  4. Prohibitions: Inherently indiscriminate systems should be prohibited

  5. Technical Standards: LAWS must be “predictable, reliable, traceable, and explainable”

  6. Lifecycle Obligations: States bear duties across the entire LAWS lifecycle including legal reviews, testing, and bias mitigation

  7. Accountability: Humans remain responsible for all decisions related to LAWS

Timeline: Many delegations have called for completing the mandate by 2025, in line with calls from the UN Secretary-General. However, consensus remains elusive on the nature of any binding instrument.

Political Declaration on Responsible Military Use of AI (2023-2024)

The United States launched the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy at the February 2023 REAIM summit.

Core Principles:

  • Military AI use must comply with international law, particularly international humanitarian law
  • AI capabilities require accountability within a “responsible human chain of command and control”
  • States should minimize unintended bias and accidents
  • High-consequence applications require senior-level review
  • Systems must be capable of deactivation if they demonstrate unintended behavior

Endorsement Status:

  • 51+ countries have endorsed the Declaration as of January 2024
  • All EU member states are participants
  • A third REAIM summit is scheduled for September 2025 in Spain

Legal Nature: The Declaration is non-binding. It “does not alter existing legal obligations of the endorsing States, nor does it add any new obligations under international law.”

U.S. Policy Framework

DoD Directive 3000.09: Autonomy in Weapon Systems

Department of Defense Directive 3000.09, first issued in 2012 and updated in January 2023, establishes U.S. policy on autonomous weapons.

Key Definitions:

LAWS are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator,” a category also described as “human out of the loop” or “full autonomy.”

Human Control Requirement:

“Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The directive acknowledges that “appropriate” is context-dependent:

“‘Appropriate’ is a flexible term that reflects the fact that there is not a fixed, one-size-fits-all level of human judgment that should be applied to every context. What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system.”

2023 Updates:

  • Replaced “human operator” with “operator” throughout
  • Added transparency, explainability, and auditability requirements
  • Aligned with research on autonomous device governance

Congressional Oversight:

  • Section 251 of the FY2024 NDAA requires the Secretary of Defense to notify Congress of any directive changes within 30 days
  • The FY2025 NDAA requires periodic reports on autonomous weapons development and deployment through 2029

DoD Ethical Principles for AI (2020)

In February 2020, the Department of Defense adopted five ethical principles for military AI development:

  1. Responsible: Personnel exercise appropriate judgment and remain responsible for AI development, deployment, and use

  2. Equitable: Deliberate steps minimize unintended bias

  3. Traceable: AI capabilities allow relevant personnel to understand the technology, with transparent and auditable methodologies, data sources, and documentation

  4. Reliable: AI has explicit, well-defined uses subject to testing and assurance across entire lifecycles

  5. Governable: AI is designed to detect and avoid unintended consequences, with ability to disengage or deactivate deployed systems

ICRC Position: Meaningful Human Control

The International Committee of the Red Cross (ICRC) has articulated the most influential framework for autonomous weapons governance.

Core Position:

“It is now widely accepted that human control must be maintained over weapon systems and the use of force, which means we need limits on autonomy.” (ICRC President Peter Maurer)

The Fundamental Problem:

After activation, autonomous weapons self-initiate strikes based on sensor inputs and generalized “target profiles.” The user does not choose, or even know, the specific targets and precise timing of force application. This loss of human control raises serious humanitarian, legal, and ethical concerns.

Recommended Prohibitions:

  • Unpredictable autonomous weapons
  • Autonomous weapons designed or used to apply force against persons

Recommended Restrictions:

  • Limits on target types
  • Limits on duration, geographical scope, and scale of use
  • Situational limits (e.g., constraining use to areas where civilians are not present)
  • Requirements for effective human supervision and timely intervention capability

Joint UN-ICRC Call (October 2023):

The UN Secretary-General and ICRC President issued a joint appeal calling for new international rules:

“Human control must be retained in life and death decisions. The autonomous targeting of humans by machines is a moral line that must not be crossed.”

They called for treaty negotiations to conclude by the end of 2026.

The Liability Challenge: Who Is Responsible?

Autonomous weapons create what scholars call an “accountability gap”: situations where unlawful harm occurs but no clear entity bears legal responsibility.

Multiple Potential Actors

When LAWS cause civilian casualties or violate international humanitarian law, at least three actors face potential liability:

  1. Operators and Commanders: The personnel who launched the system and those who ordered its deployment

  2. Political/Military Leadership: Senior officials who authorized the weapon’s use and established rules of engagement

  3. Designers and Manufacturers: Companies and engineers who developed the AI systems and integrated them into weapons

Criminal Liability Limitations

A 2015 Human Rights Watch study identified fundamental barriers to criminal accountability:

For Commanders and Operators:

  • Cannot generally be assigned direct responsibility for wrongful actions of fully autonomous weapons
  • Accountable only in “rare cases when it can be shown that they actually possessed the specific intention and capability to commit criminal acts through the misuse of fully autonomous weapons”

For Programmers and Manufacturers:

  • “Unreasonable to impose criminal punishment on the programmer or manufacturer, who might not specifically intend, or even foresee, the robot’s commission of wrongful acts”
  • Software complexity makes proving specific intent nearly impossible

Civil Liability: The Government Contractor Defense

In the United States, the Government Contractor Defense established in Boyle v. United Technologies Corp., 487 U.S. 500 (1988), provides significant immunity for defense contractors.

Three Elements:

  1. The United States approved reasonably precise specifications for the product
  2. The product conformed to those specifications
  3. The supplier warned the United States about any known dangers not known to the government

Practical Effect:

Military contractors who design weapons to government specifications and disclose known risks are largely immune from product liability suits for design defects.

Additional Immunity Layers:

  • The military is immune from suits related to policy determinations (including weapon choice)
  • Combat activities immunity
  • Foreign-country immunity under the Federal Tort Claims Act (FTCA)
  • Yearsley immunity for contractors acting within valid Congressional authorization

The Gap:

These immunities mean that even when autonomous weapons cause unlawful civilian deaths, victims may have no viable path to civil recovery in U.S. courts.

International State Responsibility

States remain responsible under international law for violations committed through autonomous weapons:

  • States bear responsibility for conduct of armed forces under IHL
  • Diplomatic protests and arbitration for damages remain available
  • States using LAWS that cause international incidents face reputational consequences as “irresponsible users”

However, enforcement mechanisms remain weak, and no international court has adjudicated LAWS-related claims.

Real-World AI Targeting: The Gaza Case Study

The use of AI targeting systems in Gaza provides a stark illustration of the human control and accountability challenges.

Reported AI Systems

According to investigative reporting by +972 Magazine, the Israel Defense Forces deployed multiple AI targeting systems:

Lavender:

  • Assigns every Gazan a score from 1 to 100 estimating the likelihood of Hamas affiliation
  • Marks individuals for kill lists, from commanders to foot soldiers
  • Reported 10% error rate in identifying Hamas affiliation
  • Sources reported that the IDF received “sweeping approval to automatically adopt its kill lists as if it were a human decision”

The Gospel:

  • Generates suggestions for buildings and structures allegedly used by militants
  • Includes estimates of civilian casualties from attacks on private residences

“Where’s Daddy?”:

  • Tracks phone movements to target individuals in their homes
  • Home presence treated as identity confirmation

Human Oversight Concerns

Targeting officers reportedly dedicated approximately 20 seconds to personally confirming each target, which could amount to verifying only that the individual was male.

“There was no ‘zero-error’ policy. Mistakes were treated statistically. Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know that statistically it’s fine.”

Scale:

  • In the first two months of the conflict, Israel reportedly attacked roughly 25,000 targets, more than four times the number struck in previous Gaza wars
  • One source stated: “Because of the system, the targets never end. You have another 36,000 waiting.”
  • For junior operatives marked by AI, civilian casualties of up to 20 per target were reportedly accepted
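
A back-of-envelope calculation using only the figures reported above, and treating them as illustrative rather than verified, shows how quickly this scale outruns individual human review. The short Python sketch below simply restates those reported numbers and computes what they imply.

```python
# Back-of-envelope illustration using only the figures reported above.
# All inputs restate reported numbers and are illustrative, not verified.

targets_first_two_months = 25_000   # targets reportedly attacked in the first two months
review_seconds_per_target = 20      # reported time spent personally confirming a target
reported_error_rate = 0.10          # reported Lavender misidentification rate
queued_targets = 36_000             # "another 36,000 waiting"

# Cumulative human review time implied by ~20 seconds per target.
total_review_hours = targets_first_two_months * review_seconds_per_target / 3600
print(f"Implied cumulative review time: {total_review_hours:.0f} hours")  # ~139 hours

# Misidentifications implied by applying the reported error rate to the queue.
implied_misidentified = reported_error_rate * queued_targets
print(f"Implied misidentified individuals in the queue: {implied_misidentified:.0f}")  # ~3,600
```

On those assumptions, two months of target confirmation amounts to well under one person-month of cumulative attention, and the reported error rate implies several thousand misidentified individuals in the queue alone; that is the arithmetic behind the “statistically it’s fine” protocol quoted above.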

Implications

The Gaza case illustrates core concerns:

  1. Meaningful human control may exist in policy documents but collapse in operational practice
  2. Speed and scale enabled by AI can overwhelm human review capacity
  3. Statistical acceptance of errors represents a fundamental departure from IHL requirements for distinction and proportionality
  4. Accountability remains unclear when AI generates targets and humans approve them in seconds

As Human Rights Watch noted: “The use of AI to inform military targeting decisions has, in some cases, risked violating international humanitarian law concerning distinction between military targets and civilians.”

The Emerging Standard of Care

For States Deploying Autonomous Weapons

International legal consensus is coalescing around several requirements:

1. Meaningful Human Control

  • Humans must make context-specific judgments of distinction, proportionality, and precaution
  • Rubber-stamp approval of AI recommendations does not constitute meaningful control
  • Time pressure cannot justify abandoning human judgment

2. System Predictability and Reliability

  • LAWS must perform predictably within defined parameters
  • Testing must verify performance across realistic operational conditions
  • Systems must be capable of deactivation when unintended behavior occurs

3. Traceability and Explainability

  • States must understand how AI systems reach targeting recommendations
  • Audit trails must support post-incident accountability (a sketch of such a record follows this list)
  • “Black box” systems that cannot explain outputs may not satisfy IHL requirements
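
The sketch below is a hypothetical illustration of the kind of per-recommendation record these traceability and explainability requirements point toward; it is not drawn from any instrument or fielded system, and every field name is an assumption made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TargetingAuditRecord:
    """Hypothetical per-recommendation record supporting post-incident review."""
    timestamp_utc: datetime                 # when the recommendation was generated
    model_id: str                           # model name and version that produced it
    input_summary: str                      # sensor/intelligence inputs relied upon
    recommendation: str                     # what the system recommended
    confidence: float                       # system-reported confidence, if any
    known_limitations: list = field(default_factory=list)  # documented failure modes relevant here
    reviewing_operator: str = ""            # who exercised human judgment
    review_duration_seconds: float = 0.0    # how long that review actually took
    decision: str = ""                      # approve / reject / escalate
    decision_rationale: str = ""            # the operator's reasoning, in their own words

# Entirely fictional example values:
record = TargetingAuditRecord(
    timestamp_utc=datetime.now(timezone.utc),
    model_id="example-model-v0",
    input_summary="example sensor track",
    recommendation="flag for further review",
    confidence=0.72,
    reviewing_operator="operator-01",
    review_duration_seconds=140.0,
    decision="escalate",
    decision_rationale="insufficient corroboration of the target profile",
)
```

The particular fields matter less than the principle: everything a post-incident legal analysis turns on, what the system saw, what it recommended, and what human judgment was actually applied, should be reconstructable after the fact.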

4. Legal Reviews

  • Article 36 of Additional Protocol I requires legal review of new weapons
  • Reviews must assess LAWS compliance with IHL principles
  • Ongoing monitoring required as AI systems evolve

For Defense Contractors

1. Specification Documentation

  • Maintain comprehensive records of government specifications
  • Document all design decisions and rationale
  • Preserve communications with government regarding capabilities and limitations

2. Risk Disclosure

  • Warn government customers of known dangers
  • Disclose limitations in AI system accuracy and reliability
  • Document error rates and failure modes from testing (see the sketch below)
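
The sketch below is one hedged illustration, not a prescribed format, of how a contractor might compute and record error rates from test outcomes alongside known failure modes for disclosure; the function and field names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DisclosureSummary:
    """Hypothetical test-result summary a contractor might disclose to the government customer."""
    test_cases: int
    false_positive_rate: float        # non-targets incorrectly flagged as targets
    false_negative_rate: float        # valid targets the system missed
    known_failure_modes: list

def summarize_test_results(outcomes, failure_modes):
    """outcomes: list of (ground_truth_is_target, system_flagged_as_target) pairs."""
    false_positives = sum(1 for truth, flagged in outcomes if flagged and not truth)
    false_negatives = sum(1 for truth, flagged in outcomes if truth and not flagged)
    negatives = sum(1 for truth, _ in outcomes if not truth)
    positives = sum(1 for truth, _ in outcomes if truth)
    return DisclosureSummary(
        test_cases=len(outcomes),
        false_positive_rate=false_positives / negatives if negatives else 0.0,
        false_negative_rate=false_negatives / positives if positives else 0.0,
        known_failure_modes=list(failure_modes),
    )

# Entirely fictional test data:
summary = summarize_test_results(
    outcomes=[(True, True), (False, True), (False, False), (True, False)],
    failure_modes=["degraded identification in low-visibility conditions"],
)
print(summary)
```

A record of this kind, maintained and communicated to the government customer, is also the sort of documentation the third Boyle element, warning the government of known dangers, presupposes.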

3. Bias Mitigation

  • Test for unintended bias in target identification
  • Document training data composition and limitations
  • Implement DoD’s “equitable” AI principle

4. Human Interface Design

  • Design interfaces that support meaningful human control
  • Ensure operators can understand AI recommendations
  • Build in deactivation capabilities as required by DoD principles

For Victims and Advocates

The accountability gap means that pursuing justice for harms caused by autonomous weapons faces severe obstacles:

Potential Avenues:

  • International Criminal Court proceedings against commanders (where jurisdiction exists)
  • State-to-state claims through diplomatic channels
  • UN Human Rights Council investigations
  • Civil society documentation and advocacy

Current Limitations:

  • No international treaty specifically governs LAWS
  • Domestic immunities shield many potential defendants
  • Attribution of specific harms to AI decisions is technically difficult
  • Political barriers impede accountability for major military powers

Looking Forward

The international community faces a critical window to establish binding rules before autonomous weapons proliferate further.

Treaty Negotiations: The UN Secretary-General and ICRC have called for treaty completion by 2026. The 166-3 General Assembly vote demonstrates that political will exists, but major military powers, including the United States, Russia, and China, have resisted binding commitments.

Technology Outpacing Policy: As one expert observed: “The pace of technology is far outstripping the pace of policy development.” Every year without binding rules sees more LAWS development and deployment.

Normative Development: Even without a treaty, the Political Declaration, CCW rolling text, and ICRC recommendations are establishing soft law standards that may influence state practice and eventually harden into binding customary international law.

Litigation Potential: As AI targeting systems see wider use and civilian casualties mount, pressure for accountability mechanisms will intensify. Novel legal theories may emerge to address the accountability gap.

The question is whether international law can adapt quickly enough to maintain meaningful human responsibility for decisions to take human life, or whether the age of autonomous killing will arrive with no clear rules and no clear accountability.
