Pentagon Threatens Contract Over AI Use Limits as Anthropic Holds Firm
A significant policy dispute is simmering between the U.S. Department of Defense and leading artificial intelligence firm Anthropic, according to a new report from Axios. The Pentagon is pressing AI developers, including Anthropic, OpenAI, Google, and xAI, to grant the military broad rights to employ their technology for "all lawful purposes." While at least one company has reportedly agreed and others are negotiating, Anthropic has emerged as the most steadfast opponent of the blanket terms.
The standoff has escalated, with Defense officials now threatening to cancel a $200 million contract with Anthropic if no agreement is reached. The tension isn't new; as early as January, The Wall Street Journal noted major disagreements over permissible uses for Anthropic's Claude AI. That report also claimed Claude was used in the military operation that led to the capture of former Venezuelan President Nicolás Maduro.
Anthropic, when contacted by Axios, stated it has "not discussed the use of Claude for specific operations with the Department of War." The company clarified that negotiations center on its core usage policy, specifically its "hard limits around fully autonomous weapons and mass domestic surveillance." The firm appears unwilling to bend these foundational rules, even under financial pressure from one of the world's largest potential clients. The outcome of this clash could set a critical precedent for how advanced AI is integrated into national defense.