Generated by RSStT

Anthropic News

Today, the White House released "Winning the Race: America's AI Action Plan"—a comprehensive strategy to maintain America's advantage in AI development. We are encouraged by the plan’s focus on accelerating AI infrastructure and federal adoption, as well as strengthening safety testing and security coordination. Many of the plan’s recommendations reflect Anthropic’s response to the Office of Science and Technology Policy’s (OSTP) prior request for information.

While the plan positions America for AI advancement, we believe strict export controls and AI development transparency standards remain crucial next steps for securing American AI leadership.

Accelerating AI infrastructure and adoption

The Action Plan prioritizes AI infrastructure and adoption, consistent with Anthropic’s submission to OSTP in March.

We applaud the Administration's commitment to streamlining data center and energy permitting to address AI’s power needs. As we stated in our OSTP submission and at the Pennsylvania Energy and Innovation Summit, without adequate domestic energy capacity, American AI developers may be forced to relocate operations overseas, potentially exposing sensitive technology to foreign adversaries. Our recently published “Build AI in America” report details the steps the Administration can take to accelerate the buildout of our nation’s AI infrastructure, and we look forward to working with the Administration on measures to expand domestic energy capacity.

The Plan also recommends increasing the federal government's adoption of AI through proposals that are closely aligned with Anthropic’s policy priorities and recommendations to the White House. These include:

  • Tasking the Office of Management and Budget (OMB) to address resource constraints, procurement limitations, and programmatic obstacles to federal AI adoption.
  • Launching a Request for Information (RFI) to identify federal regulations that impede AI innovation, with OMB coordinating reform efforts.
  • Updating federal procurement standards to remove barriers that prevent agencies from deploying AI systems.
  • Promoting AI adoption across defense and national security applications through public-private collaboration.

Democratizing AI’s benefits

We are aligned with the Action Plan’s focus on ensuring broad participation in and benefit from AI’s continued development and deployment.

The Action Plan’s continuation of the National AI Research Resource (NAIRR) pilot ensures that students and researchers across the country can participate in and contribute to the advancement of the AI frontier. We have long supported the NAIRR and are proud of our partnership with the pilot program. Further, the Action Plan’s emphasis on rapid retraining programs for displaced workers and pre-apprenticeship AI programs recognizes the errors of prior technological transitions and demonstrates a commitment to delivering AI’s benefits to all Americans.

Complementing these proposals are our efforts to understand how AI is transforming our economy, and how it will continue to transform it. The Economic Index and the Economic Futures Program aim to provide researchers and policymakers with the data and tools they need to ensure AI’s economic benefits are broadly shared and risks are appropriately managed.

Promoting secure AI development

Powerful AI systems will be developed in the coming years. The Plan’s emphasis on defending against the misuse of powerful AI models and preparing for future AI-related risks is both appropriate and commendable. In particular, we welcome the Administration’s prioritization of supporting research into AI interpretability, AI control systems, and adversarial robustness. These lines of research are essential to understanding and safely managing powerful AI systems.

We're glad the Action Plan affirms the important work of the Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology in evaluating frontier models for national security issues, and we look forward to continuing our close partnership with them. We encourage the Administration to continue investing in CAISI. As we noted in our submission, advanced AI systems are demonstrating concerning improvements in capabilities relevant to biological weapons development, and CAISI has played a leading role in developing the testing and evaluation capabilities needed to address these risks. We encourage focusing these efforts on the most unique and acute national security risks that AI systems may pose.

The need for a national standard

Beyond testing, we believe basic AI development transparency requirements, such as public reporting on safety testing and capability assessments, are essential for responsible AI development. Leading AI model developers should be held to basic and publicly verifiable standards for assessing and managing the catastrophic risks posed by their systems. Our proposed framework for frontier model transparency focuses on these risks. We would have liked to see the Plan do more on this topic.

Leading labs, including Anthropic, OpenAI, and Google DeepMind, have already implemented voluntary safety frameworks, which demonstrates that responsible development and innovation can coexist. In fact, with the launch of Claude Opus 4, we proactively activated ASL-3 protections to prevent misuse for chemical, biological, radiological, and nuclear (CBRN) weapons development. This precautionary step shows that far from slowing innovation, robust safety protections help us build better, more reliable systems.

We share the Administration’s concern that overly prescriptive regulatory approaches could create an inconsistent and burdensome patchwork of laws. Ideally, these transparency requirements would come from the government in the form of a single national standard. However, in line with our stated belief that a ten-year moratorium on state AI laws is too blunt an instrument, we continue to oppose proposals that would prevent states from enacting measures to protect their citizens from the potential harms of powerful AI systems if the federal government fails to act.

Maintaining strong export controls

The Action Plan states that “denying our foreign adversaries access to [Advanced AI compute] . . . is a matter of both geostrategic competition and national security.” We strongly agree. That is why we are concerned by the Administration’s recent reversal on the export of Nvidia H20 chips to China.

AI development has been defined by scaling laws: the intelligence and capability of a system are determined by the scale of its compute, energy, and data inputs during training. While these scaling laws continue to hold, the newest and most capable reasoning models have demonstrated that AI capability also scales with the amount of compute made available to a system while it works on a given task, known as “inference.” The amount of compute available during inference is limited by a chip’s memory bandwidth. While the H20’s raw computing power is exceeded by chips made by Huawei, as Commerce Secretary Lutnick and Under Secretary Kessler recently testified, Huawei continues to struggle with production volume, and no domestically produced Chinese chip matches the H20’s memory bandwidth.
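A rough back-of-the-envelope calculation illustrates why memory bandwidth caps inference throughput: during autoregressive decoding, generating each token requires streaming the model's weights from memory, so tokens per second is bounded by memory bandwidth divided by model size. The sketch below uses illustrative, assumed numbers (a hypothetical 70-billion-parameter model in 16-bit precision and two assumed bandwidth figures), not specifications from the Plan or from any vendor.

```python
def max_decode_tokens_per_sec(bandwidth_gb_s: float,
                              n_params_billion: float,
                              bytes_per_param: int = 2) -> float:
    """Upper bound on tokens/sec for memory-bandwidth-bound decoding.

    Each generated token requires reading every weight from memory once,
    so throughput is capped at (memory bandwidth) / (model size in bytes).
    All inputs here are illustrative assumptions.
    """
    model_bytes = n_params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Hypothetical comparison: a chip with 4,000 GB/s of memory bandwidth
# versus one with 1,600 GB/s, serving an assumed 70B-parameter model
# in 16-bit precision (2 bytes per parameter).
high_bw = max_decode_tokens_per_sec(4000, 70)  # ~28.6 tokens/sec ceiling
low_bw = max_decode_tokens_per_sec(1600, 70)   # ~11.4 tokens/sec ceiling
```

Under this simple model, a chip with greater memory bandwidth sustains proportionally more inference throughput even if a rival chip has more raw arithmetic power, which is why memory bandwidth, not peak FLOPs, is the binding constraint discussed above.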

As a result, the H20 provides unique and critical computing capabilities that would otherwise be unavailable to Chinese firms, and would compensate for China’s major shortage of AI chips. Allowing export of the H20 to China would squander an opportunity to extend American AI dominance just as a new phase of competition begins. Moreover, exports of U.S. AI chips will not divert the Chinese Communist Party from its quest for self-reliance in the AI stack.

To that end, we strongly encourage the Administration to maintain controls on the H20 chip. These controls are consistent with the export controls recommended by the Action Plan and are essential to securing and growing America’s AI lead.

Looking ahead

The alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential and the urgent actions needed to sustain American leadership.

We look forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks and maintaining strong export controls. Together, we can ensure that powerful AI systems are developed safely in America, by American companies, reflecting American values and interests.

For more details on our policy recommendations, see our full submission to OSTP, and our ongoing work on responsible AI development and our recent report on increasing domestic energy capacity.


