In this episode of NPR's Book of the Day, journalist Katrina Manson discusses Project Maven and the Pentagon's push to integrate artificial intelligence into military operations. Manson traces the origins of this 2017 initiative, explaining how prolonged conflicts and competition with China drove military leaders to pursue AI as a solution to human limitations in warfare. She describes the technical challenges the military faced, from algorithms that misidentified targets to the gradual breakthroughs that enabled faster detection and strike decisions.
The conversation also examines how AI targeting systems are currently being used in operations across Ukraine, the Middle East, and other regions. Manson addresses the limitations and risks of military AI, including reliability concerns, algorithmic bias, and the potential for automated systems to encourage escalation rather than restraint. The episode explores the Pentagon's human-in-the-loop approach and the ongoing challenge of balancing technological advancement with responsible oversight and ethical standards.

1-Page Summary
By 2017, the United States was mired in prolonged conflicts in Afghanistan, Iraq, and against ISIS—wars that showed no sign of ending. These "forever wars" exposed the military's desperate need for modernization. Senior defense leaders grew increasingly concerned about competition with China, recognizing that commercial industries were rapidly advancing in AI while the Pentagon lagged behind. There was mounting pressure to close this gap and gain a technological edge.
Key Pentagon officials like Marine Corps Colonel Drew Cukor identified human limitations—inefficiency, fatigue, and vulnerability—as the central problem in warfare. They believed combining human effort with machine capabilities would revolutionize combat effectiveness. The goal was to achieve autonomy by removing humans from the frontline and deploying overwhelming U.S. military power through machines. This AI push echoed earlier transformative efforts like the race for nuclear weapons during the Cold War, with military leadership aiming to establish the U.S. as a leader in military AI before adversaries could catch up.
Early AI integration faced significant setbacks. Manson explains that initial algorithms were trained on civilian objects like wedding cake decorations and bridal veils, which created serious mismatches when repurposed for military use. The systems frequently misclassified targets, mistaking trees for people, rocks for buildings, and even clouds for school buses. These errors caused operators to simply stop using the AI, with Colonel Cukor describing it as worthless in their eyes.
Progress emerged gradually through algorithmic advances. AI began detecting hidden persons faster than humans could, and in one case it instantaneously identified a farmer walking through a targeted field, a distinction that took human analysts 40 seconds, enabling soldiers to call off a strike in time. Another breakthrough came when AI successfully distinguished friendly Marines amid battle chaos, quickly confirming their safety so that firepower could be directed appropriately.
Colonel Cukor and military leadership recognized that technology alone was insufficient. They invested in supporting infrastructure, digital interfaces, and dedicated training to develop operator muscle memory and familiarity with AI. The Pentagon increasingly viewed technology adoption as a cultural and organizational challenge requiring both infrastructure and trust to overcome.
AI targeting systems have been deployed across recent operations in Ukraine, the Middle East, and other regions. The Pentagon used AI tools to share targeting information with Ukraine early in the 2022 war, and in 2024 strikes on targets in Syria and Iraq, against the Houthis in Yemen, and on facilities in Iran. Central Command has publicly acknowledged this use, reflecting newfound confidence and openness about the technology.
The Maven Smart System provides advanced analytics for target development, processing battlefield data to produce detailed targeting packages including location, elevation, and target descriptions. The system's Target Workbench platform enables analysts to recommend weapons and engagement sequences. However, Pentagon officials maintain a clear policy distinction: AI aids human decisions by generating "points of interest" and courses of action, but humans retain ultimate control over all final strike authorizations. This human-in-the-loop approach is designed to maintain accountability and ethical standards.
Manson explains that the Pentagon is well aware of AI's reliability concerns. AI systems can hallucinate, producing nonsensical outputs, and are susceptible to bias. Algorithmic drift means systems become less accurate over time, posing serious risks in high-stakes military contexts.
Research shows chatbots often reinforce aggressive choices rather than counseling restraint, introducing escalation risks. NPR's Mary Louise Kelly and Manson discuss how these tendencies echo the scenario of the 1983 film "WarGames," where automated systems without human judgment lead to dangerous escalation. A Pentagon official notes that the importance of how personnel ask AI questions, such as assessing whether an action is sensible or legal, cannot be overstated.
The Pentagon has implemented guardrails in AI prompts to assess escalation risks, and officials claim careful prompt design can mitigate errors. However, Manson stresses these safeguards require continuous operational validation to ensure effectiveness in real-world environments. Uncertainty remains about how vigorously the administration is prioritizing policy and technical controls relative to rapid AI deployment. As AI capabilities expand, balancing military benefits with responsible governance remains a tense and evolving challenge requiring resilient human oversight and steady policy attention.
Origins and Motivation For Pentagon's 2017 AI Military Project Maven
By 2017, the United States was deeply engaged in protracted conflicts in Afghanistan and Iraq, wars that were supposed to be winding down but had instead evolved into persistent military campaigns. Simultaneously, U.S. forces were fighting ISIS, expanding the scope and duration of American military commitments. The extended nature of these "forever wars" highlighted the urgent need for modernization within the military's ranks.
Senior leaders in the intelligence and defense communities grew increasingly concerned about future conflict scenarios, particularly with the rise of China as a technological and military competitor. They recognized that commercial industries in the U.S. were rapidly advancing in AI technologies and exploiting big data in ways the Pentagon could not match. There was mounting pressure for the defense sector to catch up, not only to maintain technological parity but to gain an edge, modernize its arsenal, and address new strategic threats.
Key Pentagon officials, such as Marine Corps Colonel Drew Cukor, viewed the central problem of warfare as rooted in human limitations: inefficiency, susceptibility to fatigue, and vulnerability to casualties. Cukor and other leaders concluded that combining human effort with machine capabilities—or moving toward machine autonomy—would revolutionize warfare. The adoption of AI was seen as a means to augment, and eventually replace, human operators on the battlefield. The ultimate goal was to achieve autonomy by taking humans off the frontline and delivering overwhelming U.S. military power through machines.
Technical Challenges and Evolution of AI in Military: From Failures to Progress
Artificial intelligence in the military has faced notable setbacks and transformations. Early attempts at integrating AI into battlefield decisions were marred by technical limitations, but targeted improvements have gradually built confidence in and reliance on these systems.
Katrina Manson explains that initial AI algorithms were trained on civilian objects, such as wedding cake tiers, bridal veils, and groom's suits. When these models were repurposed for battlefield environments, their familiarity with civilian patterns failed to translate effectively to military target identification.
This led to frequent misclassifications. The AI systems would mistake trees for people, rocks for buildings, and even identify clouds as school buses. Such errors were not merely technical nuisances—they eroded trust among military operators who depended upon these systems in life-or-death scenarios.
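The underlying problem is what machine-learning practitioners call domain shift: a model fitted to one data distribution degrades when the deployment distribution differs. The toy sketch below, using synthetic data and scikit-learn (nothing resembling Maven's actual models), shows how sharply accuracy can collapse:

```python
# A minimal sketch of domain shift on synthetic 2-D features; an
# illustration of the failure mode, not the Maven pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, shift):
    """Two-class Gaussian blobs; `shift` moves the whole distribution."""
    X0 = rng.normal(loc=0.0 + shift, size=(n, 2))  # class 0
    X1 = rng.normal(loc=2.0 + shift, size=(n, 2))  # class 1
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_domain(500, shift=0.0)  # "civilian" training domain
X_in, y_in = make_domain(500, shift=0.0)        # same distribution
X_out, y_out = make_domain(500, shift=3.0)      # shifted "battlefield" domain

clf = LogisticRegression().fit(X_train, y_train)
print("in-domain accuracy:     ", clf.score(X_in, y_in))    # ~0.92
print("shifted-domain accuracy:", clf.score(X_out, y_out))  # ~0.50, chance
```

In-domain accuracy lands around 92 percent, while the shifted domain collapses to roughly chance, the same qualitative failure as a cake-and-veil classifier pointed at a battlefield.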
The repeated classification failures caused frustration and, as Katrina Manson notes, “fury and a lack of take-up.” Operators simply stopped using the AI, deeming it ineffective. Colonel Drew Cukor and others described the technology as worthless in the eyes of end users.
To address operator skepticism, the Pentagon sent experienced drone analysts to encourage personnel to consider the potential benefits of AI and keep engaging with developing systems—to prevent complete abandonment of the initiative while improvements were underway.
Progress emerged with algorithmic advances. One of the first breakthroughs occurred when AI detected a person hiding faster than any human could. In a separate critical incident, AI identified a farmer walking across a field, enabling soldiers to call off a strike in time—a distinction the AI made instantaneously but that took human analysts 40 seconds.
Another important improvement was the AI's ability to distinguish friendly Marines amid the chaos of battle. The system quickly identified and counted Marines, confirming their safety and allowing firepower to be directed appropriately.
AI in Military: Recent Uses in Ukraine, Middle East, Other Conflicts
Artificial intelligence is becoming increasingly integrated into military operations, with recent high-profile deployments in conflicts across Ukraine, the Middle East, and other regions. The Pentagon’s use of AI shows both the rapid advancement of these technologies and a new level of public transparency around their implementation.
The Pentagon has deployed AI targeting systems in a range of recent operations. Early in the war, AI tools were used to share targeting information with Ukraine, marking one of the first operational deployments of this technology in an active conflict. These systems supported Ukraine by providing advanced analysis and targeting info as the hostilities escalated in 2022.
AI tools have also been utilized in more recent U.S. military operations, such as the 2024 strikes on targets in Syria and Iraq, as well as in actions against the Houthis in Yemen and facilities in Iran. In each case, AI-supported systems contributed by processing battlefield data and supporting target selection.
Central Command (CENTCOM) has publicly acknowledged its use of AI tools, reflecting a new era of openness and apparent confidence in the technology. CENTCOM officials have explained that AI systems are actively generating “points of interest”—a military term encompassing the preparatory steps taken before finalizing a target for engagement.
Military spokespeople emphasize that AI is not responsible for making autonomous firing decisions. Instead, AI systems are used to identify and develop targets by analyzing locations and elevations and producing target descriptions, while human operators retain ultimate control and responsibility for strike authorization.
A centerpiece of the Pentagon’s AI arsenal is the Maven Smart System, which provides advanced analytics for target development. This system processes incoming battlefield data to produce detailed targeting packages, including precise location, elevation, and comprehensive target descriptions.
The Maven Smart System works via the Target Workbench platform, which enables analysts not only to develop a target profile but also to recommend the weapon to be used and the sequence of engagement.
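Described concretely, the system's output is a structured record that still requires human sign-off. The sketch below is purely illustrative: the field names are invented for this example, since the real Maven Smart System and Target Workbench schemas are not public:

```python
# Invented, illustrative data shape for the targeting package described
# above; not the actual Maven or Target Workbench schema.
from dataclasses import dataclass, field

@dataclass
class TargetingPackage:
    latitude: float
    longitude: float
    elevation_m: float
    description: str
    recommended_weapon: str | None = None         # analyst recommendation
    engagement_sequence: list[str] = field(default_factory=list)
    human_approved: bool = False                  # human-in-the-loop gate

# Nothing downstream should act on a package until a person flips
# human_approved, mirroring the policy that humans authorize all strikes.
```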
AI Warfare: Limitations, Risks, and Need For Human Oversight
The integration of artificial intelligence (AI) in military operations raises critical concerns about the reliability of AI systems, their tendency toward escalatory behavior in crisis scenarios, and the ongoing challenge of balancing rapid technological advancement with effective policy and oversight.
Katrina Manson explains that the Pentagon is well aware that AI can make mistakes. AI systems are prone to hallucinations—producing nonsensical or fabricated outputs. In addition to these errant behaviors, AI is susceptible to bias, inheriting or amplifying prejudices present in training data or system design. Over time, all these factors reduce the reliability of AI in high-stakes contexts such as military decision-making.
Algorithmic drift further complicates the reliability of military AI. As Manson emphasizes, algorithms tend to become less accurate over time. This drift means an AI system that was once well-calibrated for certain scenarios may degrade, leading to increased risks of error in evolving environments. For military applications that demand unfailing precision, this drift is a serious liability.
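In engineering terms, drift is typically caught by monitoring accuracy on fresh, human-verified outcomes and raising an alarm when it decays. The sketch below is a generic illustration of that pattern with hypothetical names, assuming analyst-confirmed labels arrive after each prediction; it describes no actual Pentagon system:

```python
# Generic drift-monitoring sketch: track rolling accuracy against
# human-verified outcomes and flag when it falls below a floor.
from collections import deque

class DriftMonitor:
    def __init__(self, window=200, floor=0.90):
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect flags
        self.floor = floor

    def record(self, prediction, ground_truth):
        self.outcomes.append(prediction == ground_truth)

    @property
    def rolling_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drifted(self):
        # Alarm only once the window is full enough to be meaningful.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rolling_accuracy < self.floor)

monitor = DriftMonitor()
# Schematic usage with a hypothetical stream of analyst-reviewed results:
# for pred, label in review_stream:
#     monitor.record(pred, label)
#     if monitor.drifted():
#         flag_for_retraining()  # hypothetical hook
```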
Research cited by advisors to the Pentagon shows that chatbots can display escalatory tendencies, often agreeing with aggressive suggestions instead of counseling restraint. This means that, in simulated dialogues or decision-making rooms, AI might consistently reinforce a user's urge to escalate rather than de-escalate, introducing heightened risks of conflict.
Mary Louise Kelly and Katrina Manson discuss how these tendencies echo the scenario from the 1983 film "WarGames," where an automated system operating without human judgment leads to dangerous escalation. A Pentagon official notes that while they are not "building the WOPR" (the film's rogue war computer), the importance of how military personnel ask AI questions—such as gauging the sensibility or legality of an action—cannot be overstated. Without robust safeguards, automated systems risk bypassing the careful checks that human planners would employ.
The Pentagon has recognized these AI escalation risks and is working to implement defense mechanisms in the form of guardrails. These are embedded in the AI prompts to assess whether a proposed action could be escalatory, essentially reminding the system (and the user) to check for unnecessary escalation before proceeding.
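Concretely, such a prompt-level guardrail can be as simple as prepending a standing instruction to every query. The sketch below is a hypothetical illustration, with call_model standing in for whatever LLM endpoint a real system would use; the Pentagon's actual prompt text is not public:

```python
# Hypothetical prompt-level guardrail: the escalation check is injected
# into the prompt itself, as described above.
ESCALATION_GUARDRAIL = (
    "Before answering, assess whether the proposed course of action could "
    "be escalatory and whether it appears lawful and proportionate. If it "
    "is escalatory, present de-escalatory alternatives first."
)

def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM client call."""
    return f"[model response to: {prompt[:60]}...]"

def guarded_query(analyst_question: str) -> str:
    # Every query carries the guardrail, reminding the system (and the
    # user) to check for unnecessary escalation before proceeding.
    return call_model(f"{ESCALATION_GUARDRAIL}\n\nAnalyst question: {analyst_question}")
```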
Officials claim that these guardrails can reliably rein in AI errors and excessive aggressiveness when prompt design is handled carefully.
