Artificial intelligence firm Anthropic has loosened its core safety commitments, abandoning hard guardrails that once defined its approach to AI development, according to a company blog post cited by CNN.
The company replaced its binding Responsible Scaling Policy with a more flexible framework, arguing that strict self-imposed limits could leave responsible developers behind as rivals push ahead.
The change comes as Anthropic faces mounting competition and an escalating dispute with the Pentagon.
"We're updating our Responsible Scaling Policy to its third version. Since it came into effect in 2023, we've learned a lot about the RSP's benefits and its shortcomings. This update improves the policy, reinforcing what worked and committing us to even greater transparency."
— Anthropic (@AnthropicAI), February 24, 2026
Defense Secretary Pete Hegseth has reportedly warned Anthropic that it could lose a $200 million Defense Department contract if it does not relax certain AI safeguards, and the Pentagon has also threatened to classify the company as a supply chain risk.

For its part, Anthropic said its prior policy failed to create an industry-wide "race to the top" on safety.
While the company insists it still opposes AI-controlled weapons and mass domestic surveillance, critics warn the shift weakens voluntary restraints at a pivotal moment in the AI arms race.