Usage Policy Update
Today, we’re sharing some updates to our Usage Policy that reflect the growing capabilities and evolving usage of our products. Our Usage Policy serves as a framework for how Claude should and shouldn’t be used, providing clear guidance for everyone who uses Anthropic’s products.
In this update, our goal is to provide greater clarity and detail on our Policy based on user feedback, product changes, regulatory developments, and our enforcement priorities. These changes will take effect on September 15, 2025.
Below is a summary of some of the changes, and you can view the new Usage Policy here: https://www.anthropic.com/legal/aup.
Addressing cybersecurity and agentic use
Over the past year, we’ve seen rapid advances in agentic capabilities. We've released our own agentic tools like Claude Code and Computer Use, and our models power many of the world's leading coding agents.
These powerful capabilities introduce new risks, including the potential for scaled abuse, malware creation, and cyber attacks, as shared in our first threat intelligence report, Detecting and Countering Malicious Uses of Claude: March 2025 (https://www.anthropic.com/news/detecting-and-countering-malicious-uses-of-claude-march-2025).
To address these risks, we've added a section to our Usage Policy outlining the malicious computer, network, and infrastructure compromise activities that are prohibited by Anthropic. We continue to support use cases that strengthen cybersecurity, such as discovering vulnerabilities with the system owner's consent.
We’ve also published a new article to our Help Center (https://support.anthropic.com/en/articles/12005017-using-agents-according-to-our-usage-policy) on how our Usage Policy applies to agentic use more broadly. This supplementary guidance provides concrete examples of prohibited activities in agentic contexts, and is not meant to replace or supersede our Usage Policy.
Revisiting broad restrictions on political content
Our Usage Policy has historically contained broad prohibitions on all types of lobbying or campaign content. We believed this stance was appropriate given the unknown risks of AI-generated content influencing democratic processes, and these are still prominent risks that we take seriously.
We’ve heard from users that this blanket approach also limited legitimate use of Claude for policy research, civic education, and political writing. We're now tailoring our restrictions to specifically prohibit use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting. This approach enables legitimate political discourse and research while prohibiting activity that is misleading or invasive.
Updating our language on law enforcement use
Our previous Usage Policy language on law enforcement included various exceptions for back-office tools and analytical applications, which occasionally made it difficult to understand which use cases were permitted.
To address this, we've updated our policy language to be clearer and more straightforward. This update does not change what is allowed or prohibited; it simply communicates our existing stance more clearly. We continue to restrict the same areas of concern, including surveillance, tracking, profiling, and biometric monitoring, while maintaining support for appropriate back-office and analytical use cases that were already permitted.
Requirements for high-risk consumer-facing use cases
Our High-Risk Use Case Requirements apply to use cases that have public welfare and social equity implications, including legal, financial, and employment-related use of Claude. These cases require additional safeguards such as human-in-the-loop oversight and AI disclosure.
As Claude usage has expanded across enterprise use cases, we’re clarifying that these requirements apply specifically when models’ outputs are consumer-facing, not to business-to-business interactions.
Looking ahead
We view our Usage Policy as a living document, evolving as AI risks themselves evolve. We will continue to work within Anthropic and with external policymakers, subject matter experts, and civil society to evaluate our policies on an ongoing basis.