BNY builds “AI for everyone, everywhere” with OpenAI


When ChatGPT launched in late 2022, BNY made a decisive move to embrace generative AI across the enterprise. Rather than limiting experimentation to a few technologists, the firm created a centralized AI Hub, launched an internal AI deployment and education platform called Eliza, and trained its employees on responsible AI use. 


“Our mantra is ‘AI for everyone, everywhere, and in everything,’” says Sarthak Pattanaik, Chief Data and AI Officer at BNY. “This technology is too transformative, and we decided to take a platform-based approach for execution.”


That platform now supports over 125 live use cases, with 20,000 employees actively building agents.


From its start, Eliza was designed not just as a tool, but as a system of work, pairing BNY’s governance rigor with leading models—including OpenAI frontier models—to help employees build safely and confidently. 


“We’re not building side projects,” Pattanaik says. “We’re changing how the bank works.”


Maintaining trust in a systemically important institution




BNY plays a systemically important role in the global economy, managing, moving, and safeguarding assets, data, and cash across more than 100 markets. For one of the world’s largest financial institutions, with more than $57.8 trillion in assets under custody and/or administration, trust is non-negotiable.


“We are much like the circulatory system of the global financial services ecosystem,” says Pattanaik. “And from that perspective, we must ensure trust is built into everything we do.”


With that level of responsibility, deploying AI couldn’t be an afterthought or a side experiment. BNY needed an approach that balanced innovation with accountability.


“A lot of folks could have said, you have such a huge responsibility, maybe we’ll wait and see what happens with AI,” says Pattanaik. “We believe AI is going to be like the operating system of technology going forward.”


Scaling AI safely through governance by design




Key to Eliza’s success is a governance model that supports scale without slowing experimentation. “Some might see AI governance as a barrier, but in our experience, it’s been an enabler,” says Watt Wanapha, Deputy General Counsel and Chief Technology Counsel. “Good governance has allowed us to move much more quickly.”


At BNY, there are several cross-disciplinary groups that meet regularly to review and consider new AI use cases:


  • A data use review board, which brings together cross-functional leaders in intellectual property rights, cybersecurity, engineering, data, privacy, third-party relationships, and others.
  • An Artificial Intelligence release board, which convenes similar teams plus additional groups to re-review initiatives before they are deployed into production. 
  • The Enterprise AI Council, providing senior oversight and policy alignment across the firm.

Insight from the data use review board flows daily to the AI Council, which then evaluates high-impact or novel scenarios. “We had to iterate as we went along,” Wanapha notes. “As our use cases expand, and as the models shift, we have to constantly evaluate AI projects to maintain accuracy.” 


What makes BNY’s approach different is how governance is fully integrated into the tooling. Within Eliza, all prompting, agent development, model selection, and sharing happens inside a governed environment. 


“Eliza embeds governance at the system level,” Wanapha explains. “It standardizes permissions, security, and oversight across all models and tools, ensuring every workflow meets the same level of protection.”
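
BNY has not published Eliza's internals, but the pattern Wanapha describes, a single governed entry point that applies the same permission check and telemetry to every model call, can be sketched roughly as follows. All names here (`GovernedClient`, `check_permission`, the audit-log shape) are hypothetical, not BNY's actual API:

```python
# Illustrative sketch only; names and structure are invented, not Eliza's.
import time
from dataclasses import dataclass, field


@dataclass
class GovernedClient:
    """Routes every model call through the same permission check and
    audit trail, so no workflow can bypass governance."""
    user: str
    allowed_models: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def check_permission(self, model: str) -> bool:
        return model in self.allowed_models

    def call(self, model: str, prompt: str) -> str:
        if not self.check_permission(model):
            self.audit_log.append((time.time(), self.user, model, "DENIED"))
            raise PermissionError(f"{self.user} may not use {model}")
        self.audit_log.append((time.time(), self.user, model, "ALLOWED"))
        # A real system would dispatch to the model here; we return a stub.
        return f"[{model}] response to: {prompt}"


client = GovernedClient(user="analyst1", allowed_models={"gpt-4o"})
print(client.call("gpt-4o", "Summarize contract risks"))
```

The design point is that uniformity comes from the single choke point: every prompt, model, and tool passes through the same check and leaves the same telemetry, which is what "the same level of protection" means in practice.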


Empowering every employee through training and community 




At BNY, governance isn’t just about oversight; it’s how employees engage with AI every day. Eliza enforces responsible use by design. All employees complete mandatory training before they can use it, and that foundation is reinforced with additional trainings, tools, challenges, and community support. Nearly 99% of the workforce is now trained on generative AI, with many more advanced enablement opportunities available. 


“We introduced a number of different learning solutions to meet people where they are and to bring them along on the journey,” says Michelle O’Reilly, Global Head of Talent.


One standout initiative: Make AI a Habit Month, a daily series of seven-minute trainings designed to build confidence in prompting, agent building, and peer sharing. “From this month, we saw a 46% increase in the number of agents people were building,” notes O’Reilly.


This enablement model has unlocked a broader cultural shift. “People feel empowered to solve problems themselves,” says Pattanaik. “We’re seeing a culture shift in how teams operate.” 


That culture shows up in events like bank-wide hackathons, where teams from Legal, Sales, and Engineering build side-by-side. “We had a recent hackathon in Sales,” says Ed Fandrey, Head of Sales and Relationship Management. “There were no IT or tech folks present, but everyone felt like a developer.”


Unlocking firmwide impact from early use case learnings 




The first wave of agents built in Eliza, in collaboration with the AI Hub and different BNY departments, showed how quickly teams could turn ideas into impact:


  • Contract Review Assistant: Reduces legal review time by 75%, from four hours to one, across 3,000+ vendor agreements each year.
  • People Business Partner Agent: Provides fast answers about benefits and policies, cutting manual requests and improving consistency and accuracy.

These early projects sparked a cultural shift. “Before, collaboration meant more meetings,” says O’Reilly. “Today, it means experimenting together, sharing prompts, testing agents, and learning by doing.” That mindset created a flywheel of innovation, with one team’s agent often becoming another’s foundation.


Built for controlled autonomy, Eliza initially allowed only private agent builds. Now, agents created by certain teams and roles can be shared with up to ten colleagues, fueling reuse and scale. The result: more than 125 AI tools in production across every major business line, including:


  • Lead Recommendation Engine: Generates insights and opportunities that are relevant to propose and discuss with a client.
  • Metrics Agent: Summarizes learning platform usage and performance with permission-aware access.
  • Risk Insights Agent: Uses deep research to surface emerging risk signals across portfolios, helping analysts act before issues escalate.
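
The sharing rule described above (private by default, then shareable with up to ten colleagues) is easy to picture as a small registry check. This is an illustrative sketch under that assumption, not BNY's implementation; all names are invented:

```python
# Hypothetical sketch of a "share with up to ten colleagues" rule.
MAX_SHARES = 10


class AgentRegistry:
    def __init__(self):
        self.shares = {}  # agent_id -> set of colleague ids with access

    def share(self, agent_id: str, colleague: str) -> bool:
        members = self.shares.setdefault(agent_id, set())
        if colleague in members:
            return True   # already shared; idempotent
        if len(members) >= MAX_SHARES:
            return False  # cap reached; agent stays limited to its circle
        members.add(colleague)
        return True


reg = AgentRegistry()
for i in range(10):
    reg.share("lead-engine", f"colleague-{i}")
print(reg.share("lead-engine", "colleague-11"))  # False: cap reached
```

A hard cap like this keeps early reuse controlled: an agent can spread within a small circle, but broader distribution has to go through the release process.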

Eliza also introduced the concept of advanced AI agents—what BNY calls “digital employees”—with identities, access controls, and dedicated workflows. Digital employees handle everything from payment instruction validation to code security enhancements. 
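
As a rough illustration of the "digital employee" pattern, an agent bound to its own identity, a set of entitlements, and one dedicated workflow, escalating to a human supervisor when a task falls outside its access, a sketch might look like this (all names are hypothetical, and the payment check is a toy stand-in):

```python
# Illustrative sketch of a digital employee; not BNY's actual design.
from dataclasses import dataclass
from typing import Callable


@dataclass
class DigitalEmployee:
    identity: str                     # unique ID, like a human employee's
    entitlements: frozenset           # systems it is permitted to touch
    workflow: Callable[[dict], dict]  # the one job it is trained to do
    supervisor: str                   # the human trainer/nurturer

    def run(self, task: dict) -> dict:
        if task.get("system") not in self.entitlements:
            # Outside its access: hand the task to the human supervisor.
            return {"status": "escalated", "to": self.supervisor}
        return self.workflow(task)


def validate_payment(task: dict) -> dict:
    # Toy check: a payment instruction needs an amount and a beneficiary.
    ok = "amount" in task and "beneficiary" in task
    return {"status": "validated" if ok else "rejected"}


bot = DigitalEmployee("de-0421", frozenset({"payments"}),
                      validate_payment, "ops_lead")
print(bot.run({"system": "payments", "amount": 100, "beneficiary": "ACME"}))
```

The identity and entitlements are the governance hooks: the agent can be provisioned, audited, and de-provisioned the way a human account would be.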


“Now, instead of handling certain tasks in the first instance, the role of the human operator is to be the trainer or the nurturer of the digital employee,” Pattanaik says.


Turning enterprise knowledge into autonomous workflows with deep research and agents




A select group at BNY is experimenting with ChatGPT Enterprise, equipping teams with capabilities like deep research to explore new ways of working with AI. 


Deep research enables multi-step reasoning across internal and external data, powering use cases like risk modeling, scenario planning, and strategic decision-making. 


“I use it daily,” says Watt Wanapha, Deputy General Counsel. “If I’m tackling a novel legal question, I use deep research as my thought partner to help me evaluate whether there are questions I’m not asking.”


For client-facing teams, deep research is also reshaping how they prepare for conversations and strategic planning. Paired with agents, those insights could be acted on instantly, triggering follow-ups, drafting outreach, or scheduling next steps directly within client systems. 


Together with Eliza’s orchestrator layer, these advancements form the foundation for autonomous digital employees built with permissioning, oversight, and telemetry at the core. And the next frontier is already in view. 


“We continue to mature beyond knowledge extraction and reasoning,” says Pattanaik. “It’s about connecting the dots across the organization to innovate on new products, personalized for our clients.”


Lessons for AI leaders: Build it in, don’t bolt it on




BNY’s governance strategy offers a blueprint for enterprise AI teams navigating secure environments:


  • Leverage existing risk frameworks: Instead of creating generative AI-specific governance from scratch, BNY extended its mature legal and compliance processes to cover new use cases.
  • Create shared responsibility: Cross-functional councils review AI use cases, ensuring domain-specific risks are considered in real-time.
  • Make governance visible and accessible: Eliza’s interface enforces tagging, telemetry, approval flows, and access controls, without burdening end users with manual steps.
  • Invest in culture and consistency: Nearly 99% of employees have completed responsible AI training and received Eliza access. “Unless you already know how the AI and how the platform works, you're not going to be able to really think about the risks and also the possibilities,” Wanapha notes.
  • Build with the right partner: “With AI, we are all encountering new questions that have not been answered,” says Wanapha. “So it's very important to have the right partner and an open channel of communication.”

The combination of in-house accountability and external partnership continues to be a key enabler of growth. “It’s a great mix,” says Pattanaik, “of the research OpenAI provides and the purposeful business case BNY provides.”



Power your institution with advanced intelligence

See how OpenAI can help your organization scale AI securely and responsibly.
Contact sales



