Anthropic News

Anthropic is endorsing SB 53, the California bill that would govern powerful AI systems built by frontier AI developers like Anthropic. We’ve long advocated for thoughtful AI regulation, and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation (SB 1047). While we believe that frontier AI safety is best addressed at the federal level rather than through a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.

Governor Newsom assembled the Joint California Policy Working Group (https://www.cafrontieraigov.org/), a group of academics and industry experts, to provide recommendations on AI governance. The working group endorsed a 'trust but verify' approach, and Senator Scott Wiener’s SB 53 implements this principle through disclosure requirements rather than the prescriptive technical mandates that plagued last year's efforts.

What SB 53 achieves

SB 53 would require large companies developing the most powerful AI systems to:

  • Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
  • Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models.
  • Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
  • Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
  • Be publicly accountable for the commitments made in their frameworks or face monetary penalties.

These requirements would formalize practices that Anthropic and many other frontier AI companies already follow. At Anthropic, we publish our Responsible Scaling Policy, detailing how we evaluate and mitigate risks as our models become more capable. We release comprehensive system cards that document model capabilities and limitations. Other frontier labs (Google DeepMind, OpenAI, Microsoft) have adopted similar approaches while vigorously competing at the frontier. Now all covered models will be legally held to this standard. The bill also appropriately focuses on large companies developing the most powerful AI systems, while providing exemptions for startups and smaller companies that are less likely to develop powerful models and should not bear unnecessary regulatory burdens.

SB 53’s transparency requirements will have an important impact on frontier AI safety. Without these requirements, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete. But with SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional.

Looking ahead

SB 53 provides a strong regulatory foundation, but we can and should build upon this progress in the following areas, and we look forward to working with policymakers to do so:

  • The bill currently decides which AI systems to regulate based on how much computing power (FLOPS) was used to train them. The current threshold (10^26 FLOPS) is an acceptable starting point, but there’s always a risk that some powerful models may not be covered (see the illustrative sketch after this list).
  • Similarly, developers should be required to provide greater detail about the tests, evaluations, and mitigations they undertake. When we share our safety research, document our red-team testing, and explain our deployment decisions, as we have done alongside industry players via the Frontier Model Forum, it strengthens rather than weakens our work.
  • Lastly, regulations need to evolve as AI technology advances. Regulators should have the ability to update rules as needed to keep up with new developments and maintain the right balance between safety and innovation.
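To make the compute threshold concrete, here is a minimal back-of-the-envelope sketch in Python of how a developer might estimate whether a training run crosses a 10^26 FLOPS line. It uses the common "6 × parameters × training tokens" approximation for dense transformer training; that approximation, the example parameter and token counts, and the helper names (estimated_training_flops, is_covered) are illustrative assumptions, not SB 53's actual accounting method.

    # Back-of-the-envelope check against a compute threshold like SB 53's 10^26 FLOPS.
    # The 6 * N * D rule of thumb (forward + backward pass for a dense transformer)
    # is an illustrative approximation, not the bill's definition of covered compute.

    THRESHOLD_FLOPS = 1e26  # SB 53's current coverage threshold

    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough estimate of total training compute: ~6 FLOPs per parameter per token."""
        return 6 * n_parameters * n_training_tokens

    def is_covered(n_parameters: float, n_training_tokens: float) -> bool:
        """True if the estimated training compute meets or exceeds the threshold."""
        return estimated_training_flops(n_parameters, n_training_tokens) >= THRESHOLD_FLOPS

    if __name__ == "__main__":
        # Hypothetical example: a 400B-parameter model trained on 40T tokens.
        params, tokens = 4e11, 4e13
        flops = estimated_training_flops(params, tokens)
        print(f"Estimated training compute: {flops:.2e} FLOPS")      # ~9.6e25
        print(f"Covered at the 1e26 threshold: {is_covered(params, tokens)}")

In this hypothetical case, a very large training run still lands just under 10^26 FLOPS, which is exactly the kind of gap the first bullet above highlights.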

We commend Senator Wiener and Governor Newsom for their leadership on responsible AI governance. The question isn't whether we need AI governance—it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former. We encourage California to pass it, and we look forward to working with policymakers in Washington and around the world to develop comprehensive approaches that protect public interests while maintaining America's AI leadership.


