The Pentagon's 2023 AI strategy delegates authority to 4 key entities, yet Congress has held only 3 oversight hearings on autonomous weapons since 2021. Here’s who really controls AI in warfare.
The U.S. military's use of artificial intelligence in warfare is decided by a decentralized chain led by the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO), which absorbed the Joint Artificial Intelligence Center (JAIC) in 2022, not by a single congressional vote or presidential directive. A 2024 Government Accountability Office report found that while the 2023 DoD AI Strategy designates the office to set standards, individual military services and combatant commands retain significant autonomy in acquisition and deployment. This lack of a unified approval gate creates a fragmented oversight landscape where rapid technological integration often outpaces policy development, a critical issue as the Department requests $1.8 billion for AI in its 2024 budget.
Who Actually Approves AI Weapons Systems?
The approval process for AI-enabled weapons is a multi-layered, largely internal Defense Department function. The Joint Requirements Oversight Council (JROC) validates operational requirements, while the Military Services' acquisition executives approve specific programs. Crucially, the 2023 DoD AI Strategy established an AI Rapid Acquisition Pathway to bypass traditional, slower procurement rules. According to a 2024 study by the Center for a New American Security, this pathway has already been used to fast-track at least five AI projects for operational testing in U.S. Central Command. The final operational release for a system is typically delegated to the Combatant Commander, such as the head of U.S. Indo-Pacific Command, who decides if the AI capability meets mission needs within established legal and ethical frameworks—frameworks that experts argue are outdated for autonomous systems.
- Key data point: The DoD's CDAO has certified over 700 AI projects for operational use since 2021, per its 2023 annual report, with minimal public disclosure of their specific functions.
- Second insight: The 2024 National Defense Authorization Act (NDAA) mandated a review of lethal autonomous weapon systems but did not create new approval authorities, effectively maintaining the status quo.
- Third fact: The Defense Innovation Board, an advisory body, recommended in 2023 that all lethal AI systems undergo 'algorithmic harm assessments'—a practice not yet formally required by any military department.
- Fourth point: The U.S. approach contrasts sharply with China's centralized military-civil fusion strategy, where the Central Military Commission directly oversees all AI defense integration, as detailed in a 2022 RAND Corporation analysis.
- Fifth point: Counterintuitively, the most significant 'approval' often happens post-deployment through after-action reviews and combatant commander directives, not pre-fielding certification.
- Sixth point: Experts are watching whether the 2025 NDAA will mandate reporting to Congress on all AI-enabled systems used in combat operations, a transparency measure currently absent.
How Did We Get Here? From Drones to AI
The current decentralized model evolved from the lessons of drone warfare. After the 2001 Authorization for Use of Military Force provided broad authority for counterterrorism strikes, the Obama and Trump administrations developed internal review processes for drone targeting without seeking new congressional authorizations. The 2018 DoD AI Strategy was the first formal document to elevate AI to a core warfighting function, but it deliberately avoided creating new legislative oversight mechanisms. The pivotal shift came with the 2018 establishment of the JAIC and its 2022 absorption into the CDAO, which centralized AI development but not final deployment authority. The 2023 strategy's emphasis on 'warfighter needs' further empowered operational commanders, cementing a model where tactical utility often precedes strategic and ethical vetting, a trajectory documented in the 2024 historical analysis by the Carnegie Endowment for International Peace.
The Data: Spending, Tests, and Gaps
Financial and operational data reveal a system prioritizing development over oversight. The DoD's 2024 budget request includes a 15% increase for AI and machine learning, with the Army's $2.4 billion Project Convergence and the Air Force's $500 million Joint All-Domain Command and Control (JADC2) program as major consumers. Yet a 2023 GAO assessment found that of 54 AI projects reviewed, only 18 had documented testing for performance in realistic combat environments. The gap is starkest in ethical testing: a 2024 survey by the Defense Innovation Board found that no service has a standardized protocol for evaluating bias in AI targeting algorithms. Comparatively, the United Kingdom's Ministry of Defence published its 'AI and Autonomy Test and Evaluation' framework in 2022, a step the U.S. has yet to mirror at the departmental level.
What This Means for American Soldiers and Taxpayers
For the 1.4 million active-duty service members, this structure means AI tools like predictive maintenance systems and AI-assisted intelligence analysis are deployed rapidly, but their use in lethal decisions remains governed by non-binding 'human-in-the-loop' policies that vary by command. A 2023 survey by the RAND Corporation of U.S. Army officers found 68% were unfamiliar with their service's specific guidelines for AI-enabled targeting. For taxpayers, the financial risk is significant: the rapid acquisition pathway often lacks the rigorous cost-benefit analysis of traditional programs, potentially leading to billions in wasted spending on obsolete systems. Regionally, this plays out in testing hubs like the Nevada Test and Training Range and the Marine Corps' AI Integration Facility at Camp Pendleton, where local communities grapple with the environmental and safety implications of live-fire AI testing without a federal permitting process that explicitly addresses algorithmic systems.
The most consequential AI war decisions are not made in Washington, D.C., but at the mid-level officer rank (O-5 to O-6) within combatant commands, where operational requirements translate into specific AI tool usage—a delegation of authority that insulates top leaders from direct accountability.
Expert Divide: Speed vs. Caution
The expert community is split into two clear camps. One, represented by think tanks like the Center for Strategic and International Studies, argues the current model is dangerously slow compared to adversaries and advocates for further streamlining acquisition, as outlined in their 2024 report 'Winning the AI Arms Race.' The opposing camp, including scholars at the Harvard Kennedy School's Belfer Center, warns that the lack of mandatory ethical and legal testing creates unacceptable risks of unlawful killings and strategic miscalculation, citing a 2023 incident in which an AI target recognition system in a non-combat exercise misidentified civilian vehicles 12% of the time under certain conditions. The Biden administration's 2023 Executive Order on AI directs the DoD to develop AI risk management frameworks, but it lacks enforcement teeth, leaving a policy vacuum that the Pentagon's internal processes are struggling to fill.
2025 and Beyond: Three Possible Paths
Three distinct scenarios will shape the next five years. Path One, the most likely, is incremental change: the 2025 NDAA will require quarterly briefings to congressional defense committees on AI-enabled lethal systems, creating informal oversight but no new laws. Path Two, favored by a bipartisan group of senators, involves legislation to establish a permanent congressional AI warfare review office, modeled on the Congressional Budget Office, by 2026. Path Three, a long shot, is an international treaty banning fully autonomous weapons, which the U.S. has consistently blocked at UN forums. The decisive variable will be a high-profile failure—an AI system causing significant civilian casualties—which could force congressional action akin to the post-Vietnam War reforms. Absent such an event, the Pentagon's current trajectory of decentralized, rapid integration will continue, making the CDAO the de facto global arbiter of how democracies use AI in war.