Tony Kim
Feb 24, 2026 20:48
Anthropic releases the third version of its Responsible Scaling Policy, separating firm commitments from industry-wide recommendations after 2.5 years of testing.
Anthropic has launched the third iteration of its Responsible Scaling Policy, marking a major restructuring of how the AI company approaches catastrophic risk mitigation after two and a half years of real-world implementation.
The update, published February 24, 2026, introduces three major changes: a clear separation between what Anthropic can achieve alone versus what requires industry-wide action, a new Frontier Safety Roadmap with public accountability metrics, and mandatory external review of Risk Reports under certain conditions.
What Actually Changed
The most notable shift? Anthropic is now openly admitting that some safety measures simply can't be implemented by a single company. The previous RSP's higher-tier safeguards (ASL-4 and beyond) were left intentionally vague; it turns out that wasn't just caution, it was because achieving them unilaterally may be impossible.
A RAND report cited by Anthropic states that "SL5" security standards aimed at stopping top-tier cyber threats are "currently not possible" and "will likely require support from the national security community."
Rather than water down these requirements to make compliance easy, Anthropic chose to restructure entirely. The new RSP now explicitly maps out two tracks: commitments the company will meet regardless of external factors, and recommendations it believes the entire AI industry must adopt.
The Honest Assessment
Anthropic's post-mortem on RSP versions 1 and 2 is refreshingly candid. What worked: the policy forced internal teams to treat safety as a launch requirement, and competitors like OpenAI and Google DeepMind adopted similar frameworks within months. ASL-3 safeguards were successfully activated in May 2025.
What didn't work: capability thresholds proved far more ambiguous than anticipated. Biological risk evaluation offers a telling example: models now pass most rapid screening tests, making it hard to argue risks are low, but results aren't definitive enough to prove risks are high either. By the time wet-lab trials complete, more powerful models have already shipped.
The political environment hasn't helped. Federal safety-oriented discussions have stalled as policy focus shifted toward AI competitiveness and economic growth.
New Accountability Mechanisms
The Frontier Safety Roadmap introduces specific, publicly graded goals, including "moonshot R&D" projects for information security, automated red-teaming systems that exceed current bug bounty contributions, and comprehensive records of all critical AI development activities, analyzed by AI for insider threats.
Risk Reports will be published every 3-6 months, explaining how capabilities, threat models, and mitigations fit together. External reviewers with "unredacted or minimally-redacted access" will publicly critique Anthropic's reasoning.
The company is already running pilots even though current models have not yet triggered the external review requirement.
Industry Implications
This restructuring arrives as AI governance frameworks face growing scrutiny. California's SB 53, New York's RAISE Act, and the EU AI Act's Codes of Practice have all begun requiring frontier developers to publish catastrophic risk frameworks, requirements Anthropic addresses through its existing Frontier Compliance Framework.
Whether competitors follow Anthropic's lead on separating unilateral commitments from industry recommendations remains to be seen. The approach essentially acknowledges that voluntary self-regulation has limits, while positioning the company to advocate for coordinated government action without appearing to demand rules it can't follow itself.
For the broader AI sector, Anthropic's transparent acknowledgment of what single companies cannot achieve alone may prove more influential than the technical policy details themselves.
Image source: Shutterstock