The Sora feed philosophy

Principles

Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create. Here are some of our core starting principles to bring this vision to life:


  • Optimize for creativity. We’re designing ranking to favor creativity and active participation, not passive scrolling. We think this is what makes Sora joyful to use.
  • Put users in control. The feed ships with steerable ranking, so you can tell the algorithm exactly what you’re in the mood for (see the sketch after this list). Parents can also turn off feed personalization and control continuous scroll for their teens through ChatGPT parental controls.
  • Prioritize connection. We want Sora to help people strengthen and form new connections, especially through fun, magical Cameo flows. Connected content will be favored over global, unconnected content.
  • Balance safety and freedom. The feed is designed to be widely accessible and safe. Robust guardrails aim to prevent unsafe or harmful generations from the start, and we block content that may violate our Usage Policies. At the same time, we also want to leave room for expression, creativity, and community. We know recommendation systems are living, breathing things. As we learn from real use, we’ll adjust the details in service of these principles.
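
To make “steerable ranking” concrete, here is a minimal sketch, assuming a ranked candidate pool and topic tags on each post. The `Post` shape, the tags, and the multiplicative boost are illustrative assumptions, not Sora’s actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical sketch of steerable ranking. Names, fields, and the
# boost factor are illustrative assumptions, not Sora's implementation.

@dataclass
class Post:
    post_id: str
    base_score: float   # output of the main ranking model
    topics: set[str]    # topic tags attached to the post

def steer(posts: list[Post], preferred: set[str], boost: float = 2.0) -> list[Post]:
    """Re-rank candidates so posts matching the viewer's stated mood rise.

    `preferred` would come from parsing an instruction like
    "show me more stop-motion animation" into topic tags.
    """
    def score(p: Post) -> float:
        # Boost posts that share at least one topic with the request.
        return p.base_score * (boost if p.topics & preferred else 1.0)

    return sorted(posts, key=score, reverse=True)
```

The design point in this sketch: steering re-weights the underlying ranking rather than replacing it, so the model’s quality signal still matters while the viewer’s instruction shifts what surfaces first.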

How it works

Our recommendation algorithms are designed to give you personalized recommendations that inspire you and others to be creative. Each individual has unique interests and tastes, so we’ve built a personalized system to best serve this mission.


To personalize your Sora Feed, we may consider signals like:


  • Your activity on Sora: This may include your posts, followed accounts, posts you’ve liked or commented on, and remixed content. It may also include the general location (such as the city) from which your device accesses Sora, based on information like your IP address.
  • Your ChatGPT data: We may consider your ChatGPT history, but you can always turn this off in Sora’s Data Controls, within Settings.
  • Content engagement signals: This may include signals such as views, likes, comments, instructions to “see less content like this,” and remixes.
  • Author signals: This may include follower count, other posts, and past post engagement.
  • Safety signals: Whether the post is considered violative or otherwise inappropriate.

We may use these signals to predict whether a given post is something you may like to see and riff on.
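
As a rough illustration of how signals like these could be folded into a single prediction, here is a hedged sketch in Python. The feature names, weights, and linear form are assumptions made for clarity; a production ranker would learn such weights from data rather than hand-set them.

```python
# Hypothetical sketch: folding personalization signals into one interest
# score. Feature names and weights are illustrative, not Sora's model.

SIGNAL_WEIGHTS = {
    "follows_author": 1.5,     # your activity on Sora
    "topic_affinity": 1.0,     # inferred from likes, comments, remixes
    "chatgpt_affinity": 0.5,   # only if enabled in Sora's Data Controls
    "engagement_rate": 0.8,    # views, likes, comments, remixes on the post
    "author_quality": 0.4,     # follower count, past post engagement
    "see_less_penalty": -2.0,  # "see less content like this" instructions
}

def interest_score(features: dict[str, float], is_violative: bool) -> float:
    """Predict how likely the viewer is to enjoy, and riff on, this post.

    Safety acts as a hard gate rather than a weighted feature: violative
    content is excluded outright, never merely down-ranked.
    """
    if is_violative:
        return float("-inf")
    return sum(w * features.get(name, 0.0)
               for name, w in SIGNAL_WEIGHTS.items())
```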


Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.


How we balance safety & expression

Keeping the Sora Feed safe and fun for everyone means walking a careful line: protect users from harmful content, while leaving enough freedom for creativity to thrive.


We may remove content that violates our Global Usage Policies. Additionally, content deemed inappropriate for users may be removed from the feed and other sharing platforms (such as user galleries and side characters) in accordance with our Sora Distribution Guidelines. This includes:


  • Graphic sexual content;
  • Graphic violence or content promoting violence;
  • Extremist propaganda;
  • Hateful content;
  • Content that promotes or depicts self-harm or disordered eating;
  • Unhealthy dieting or exercise behaviors;
  • Appearance-based critiques or comparisons;
  • Bullying content;
  • Dangerous challenges likely to be imitated by minors;
  • Content glorifying depression;
  • Promotion of age-restricted goods or activities, including illegal drugs or harmful substances;
  • Low-quality content where the primary purpose is engagement bait;
  • Content that recreates the likeness of living individuals without their consent, or of deceased public figures in contexts where their likeness is not permitted for use; and
  • Content that may infringe on the intellectual property rights of others.

Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it’s made. If a generation bypasses these guardrails, we may block that content from being shared.
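
In pipeline terms, that first layer amounts to two checkpoints: one before a request is rendered, and one before the finished output can be shared. The sketch below uses stand-in classifiers (`classify_prompt`, `classify_output`) that are assumptions for illustration, not Sora’s actual moderation stack.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    is_unsafe: bool
    reason: str = ""

# Stand-in classifiers, assumed here purely for illustration.
def classify_prompt(prompt: str) -> Verdict:
    banned = ("extremist propaganda", "graphic violence")
    return Verdict(any(term in prompt.lower() for term in banned), "prompt")

def classify_output(video: bytes) -> Verdict:
    return Verdict(False)  # a real system would scan the rendered frames

def render(prompt: str) -> bytes:
    return b"..."          # placeholder for the video generation model

def generate_post(prompt: str) -> bytes | None:
    """Refuse unsafe requests before generation, then re-check the
    finished output before it is allowed to be shared."""
    if classify_prompt(prompt).is_unsafe:
        return None        # blocked before anything is made
    video = render(prompt)
    if classify_output(video).is_unsafe:
        return None        # generated, but blocked from sharing
    return video
```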


Beyond generation, the feed is designed to be appropriate for all Sora users. Content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts. We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility, and these systems are continuously updated as we learn more about new risks.
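
A minimal sketch of that eligibility check, assuming a hypothetical label taxonomy produced by the scanners and a `viewer_is_teen` flag, might look like this:

```python
from dataclasses import dataclass

@dataclass
class ScannedPost:
    post_id: str
    labels: set[str]   # output of the automated content scanners

# Illustrative label taxonomy -- not Sora's actual categories.
POLICY_VIOLATIONS = {"graphic_sexual", "extremist", "hateful", "bullying"}
TEEN_RESTRICTED = {"age_restricted_goods", "dangerous_challenge"}

def feed_eligible(post: ScannedPost, viewer_is_teen: bool) -> bool:
    """Decide whether a scanned post may appear in this viewer's feed."""
    if post.labels & POLICY_VIOLATIONS:
        return False                   # excluded for every viewer
    if viewer_is_teen and post.labels & TEEN_RESTRICTED:
        return False                   # extra filter for teen accounts
    return True
```

Policy-violating labels exclude a post for everyone, while the teen-restricted set applies an additional, stricter filter, matching the layered approach described above.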


We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.


But safety isn’t only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive “report + takedown” system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT’s 4o image generation model, and we’re building on that philosophy here.


We also know we won’t get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.


