Anthropic's AI, Built for Safety, Now Aids U.S. Intelligence
In 2021, Anthropic was founded on a promise: to build artificial intelligence with safety as its core principle. By 2026, the company’s Claude AI system is assisting U.S. defense and intelligence agencies, according to an NBC News investigation. This shift from safety lab to government partner reveals a fundamental tension within the AI industry.
The change followed a quiet but significant revision to Anthropic’s acceptable use policy in late 2024, removing a blanket ban on military applications. The company now argues that responsible engagement with democratic governments aligns with its safety mission. This paved the way for a deal with Palantir and Amazon Web Services, placing Claude on a platform certified for secret-level national security data.
One specific application involves Venezuela. Reporting indicates Claude has been used to analyze intelligence related to the government of Nicolás Maduro. This supports a broader pressure campaign by the U.S., which has included sanctions and military posturing.
Anthropic is not an outlier. OpenAI, Google, and Microsoft have all pursued defense contracts, drawn by substantial Pentagon budgets and the need for stable revenue. For Anthropic, which spends billions annually on computing, these contracts offer a steadier financial footing.
CEO Dario Amodei frames the work as a strategic necessity, suggesting it is better for safety-focused firms to guide U.S. capabilities than to cede the field to rivals like China. Yet, the move has caused unease among some employees who joined under the original ethos, and it tests the company’s branded "constitutional AI" approach.
With binding regulation for military AI largely absent, the boundaries of this technology are being set by corporate policy and government demand. As Claude helps draft reports and connect intelligence dots, Anthropic’s foundational ideals are undergoing a very practical, and consequential, real-world test.
Original source: WebProNews