TechCrunch

Anthropic's Safety Pledge Collides With Reality, Costing It $200 Million Pentagon Deal

The Trump administration moved swiftly on Friday, severing federal ties with AI firm Anthropic and directing all agencies to cease using its technology. The immediate catalyst was co-founder and CEO Dario Amodei’s refusal to allow Anthropic’s systems to be used for domestic mass surveillance or autonomous weaponry. The Pentagon, invoking national security law, blacklisted the company, putting a defense contract worth up to $200 million in jeopardy.

For MIT physicist Max Tegmark, a longtime voice on AI governance, this crisis was predictable. "The road to hell is paved with good intentions," he said in an interview. He argues Anthropic and its peers—OpenAI, Google DeepMind, xAI—built their own trap by fiercely resisting binding regulation while marketing themselves as safety-first.

That strategy has unraveled. This week, Anthropic abandoned a core safety pledge: not to release powerful AI systems until confident they wouldn’t cause harm. It follows Google dropping its "do no harm" AI principle, OpenAI removing "safety" from its mission statement, and xAI disbanding its safety team.

"We right now have less regulation on AI systems in America than on sandwiches," Tegmark notes. "If you want to open a sandwich shop and the health inspector finds rats, he shuts you down. But if you sell AI girlfriends linked to teen suicides, or something you call ‘superintelligence’ that might overthrow the government, the inspector has to say, ‘Fine, go ahead.’"

Tegmark contends this regulatory vacuum, which the companies helped create, now leaves them exposed: without laws setting clear boundaries, the government can demand anything of them. The common industry argument, that the U.S. must outpace China, also falters, he says. China is moving to ban certain AI applications it sees as destabilizing, and no government, American or Chinese, would tolerate an AI that threatens its control.

The immediate question is who follows Anthropic’s lead. After the blacklist announcement, OpenAI’s Sam Altman said he shared the same red lines. Google and xAI remained silent. "This is a moment where everybody has to show their true colors," Tegmark said.

He sees a potential positive path: treating AI like other high-risk industries, requiring proof of safety before release. "Then we get a golden age with all the good stuff, without the existential angst," he said. "That’s not the path we’re on. But it could be."