Report: U.S. Military Deployed Anthropic's Claude AI in Covert Venezuela Operation

According to a Wall Street Journal report, the U.S. military used Anthropic's AI model, Claude, during a controversial operation in Venezuela. The mission, which Venezuelan authorities say involved bombing in Caracas and resulted in dozens of fatalities, would mark a significant and public breach of the AI firm's own terms of service, which explicitly prohibit using Claude for violence, weaponry, or surveillance.
The Journal, citing anonymous sources, indicated the AI was accessed through Anthropic's partnership with defense contractor Palantir Technologies. Anthropic, Palantir, and the Pentagon did not comment directly on the report. If accurate, the incident would be the first known use of a major AI developer's technology in a classified U.S. military operation, raising immediate ethical and policy questions.
Anthropic's leadership, including CEO Dario Amodei, has publicly advocated for strict regulation to prevent AI harms and expressed specific concern over autonomous lethal operations. This cautious position has reportedly created tension with the Defense Department. Earlier this year, Secretary of War Pete Hegseth stated the military would not use AI models that restrict warfighting capabilities.
The Pentagon has since announced a partnership with Elon Musk's xAI and uses tailored versions of other major AI systems. The reported use of Claude in a live combat scenario illustrates the rapid and contentious integration of commercial AI into modern warfare, a trend also seen with Israeli forces in Gaza and previous U.S. strikes in the Middle East. Critics continue to warn that ceding life-and-death decisions to algorithms invites catastrophic error.
Source: The Guardian