AWS and OpenAI announce multi-year strategic partnership
Key takeaways:
- The multi-year, strategic partnership gives OpenAI immediate and increasing access to AWS’s world-class infrastructure for its advanced AI workloads.
- AWS to provide OpenAI with Amazon EC2 UltraServers, featuring hundreds of thousands of chips, and the ability to scale to tens of millions of CPUs for its advanced generative AI workloads.
- Under a $38B commitment, OpenAI will rapidly expand compute capacity while benefiting from the price, performance, scale, and security of AWS.
Today, Amazon Web Services (AWS) and OpenAI announced a multi-year, strategic partnership that provides AWS’s world-class infrastructure to run and scale OpenAI’s core artificial intelligence (AI) workloads starting immediately. Under this new $38 billion agreement, which will see continued growth over the next seven years, OpenAI is accessing AWS compute comprising hundreds of thousands of state-of-the-art NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads. AWS has deep experience running AI infrastructure securely and reliably at scale, with clusters topping 500K chips. AWS’s leadership in cloud infrastructure, combined with OpenAI’s pioneering advancements in generative AI, will help millions of users continue to get value from ChatGPT.
The rapid advancement of AI technology has created unprecedented demand for computing power. As frontier model providers seek to push their models to new heights of intelligence, they are increasingly turning to AWS due to the performance, scale, and security they can achieve. OpenAI will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.
The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for maximum AI processing efficiency and performance. Clustering the NVIDIA GPUs—both GB200s and GB300s—via Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance. The clusters are designed to support various workloads, from serving inference for ChatGPT to training next generation models, with the flexibility to adapt to OpenAI's evolving needs.
“Scaling frontier AI requires massive, reliable compute," said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
“As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads."
This news builds on the companies’ ongoing work to bring cutting-edge AI technology to organizations worldwide. Earlier this year, OpenAI’s open weight foundation models became available on Amazon Bedrock, bringing these additional model options to millions of customers on AWS. OpenAI has quickly become one of the most popular publicly available model providers in Amazon Bedrock, with thousands of customers, including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health, using its models for agentic workflows, coding, scientific analysis, mathematical problem-solving, and more.
To get started with OpenAI’s open weight models in Amazon Bedrock, visit: aws.amazon.com/bedrock/openai
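As a minimal sketch of what getting started looks like, the snippet below builds a request for the Bedrock Runtime `Converse` API with boto3. The model ID shown is an assumption for illustration; check the Bedrock console for the OpenAI open weight model IDs actually available in your region, and note that the live call requires AWS credentials and granted model access.

```python
# Sketch: calling an OpenAI open weight model via Amazon Bedrock's Converse API.
# The model ID is an assumed placeholder; verify it in your Bedrock console.
import json

MODEL_ID = "openai.gpt-oss-120b-1:0"  # assumption, not an official reference

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

if __name__ == "__main__":
    # With AWS credentials and model access, the actual call would be:
    #   import boto3
    #   client = boto3.client("bedrock-runtime", region_name="us-west-2")
    #   response = client.converse(**build_converse_request("Hello!"))
    #   print(response["output"]["message"]["content"][0]["text"])
    # Here we just show the request payload that would be sent:
    print(json.dumps(build_converse_request("Hello!"), indent=2))
```

The request shape (a `messages` list of role/content turns plus an `inferenceConfig`) is the standard Converse API contract, which is the same across Bedrock model providers.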