Unlocking the Codex harness: how we built the App Server
OpenAI’s coding agent Codex exists across many different surfaces: the web app, the CLI, the IDE extension, and the new Codex macOS app. Under the hood, they’re all powered by the same Codex harness—the agent loop and logic that underlies all Codex experiences. The critical link between them? The Codex App Server, a client-friendly, bidirectional JSON-RPC API.
In this post, we’ll introduce the Codex App Server; we’ll share our learnings so far on the best ways to bring Codex’s capabilities into your product to help your users supercharge their workflows. We’ll cover the App Server’s architecture and protocol and how it integrates with different Codex surfaces, as well as tips on leveraging Codex, whether you want to turn Codex into a code reviewer, an SRE agent, or a coding assistant.
Origin of the App Server
Before diving into architecture, it’s helpful to know the App Server’s backstory. The App Server began as a practical way to reuse the Codex harness across products, and it gradually evolved into our standard protocol.
Codex CLI started as a TUI (terminal user interface), meaning Codex is accessed through the terminal. When we built the VS Code extension (a more IDE-friendly way to interact with Codex agents), we needed a way to use the same harness so as to drive the same agent loop from an IDE UI without re-implementing it. That meant supporting rich interaction patterns beyond request/response, such as exploring the workspace, streaming progress as the agent reasons, and emitting diffs. We first experimented with exposing Codex as an MCP server, but maintaining MCP semantics in a way that made sense for VS Code proved difficult. Instead, we introduced a JSON-RPC protocol that mirrored the TUI loop, which became the unofficial first version of the App Server. At the time, we didn’t expect other clients to depend on the App Server, so it wasn’t designed as a stable API.
As Codex adoption grew over the next few months, internal teams and external partners wanted the ability to embed the same harness in their own products in order to accelerate their users’ software development workflows. For example, JetBrains and Xcode wanted an IDE-grade agent experience, while the Codex desktop app needed to orchestrate many Codex agents in parallel. Those demands pushed us to design a platform surface that both our products and partner integrations could safely depend on over time. It needed to be easy to integrate and backward compatible, meaning we could evolve the protocol without breaking existing clients.
Next, we’ll walk through how we designed the architecture and protocol so different clients can use the same harness.
Inside the Codex harness
First, let’s zoom in on what’s inside the Codex harness and how the Codex App Server exposes it to clients. In our last Codex blog, we broke down the core agent loop that orchestrates the interaction between the user, the model, and the tools. This is the core logic of the Codex harness, but there’s more to the full agent experience:
1. Thread lifecycle and persistence. A thread is a Codex conversation between a user and an agent. Codex creates, resumes, forks, and archives threads, and persists the event history so clients can reconnect and render a consistent timeline.
2. Config and auth. Codex loads configuration, manages defaults, and runs authentication flows like “Sign in with ChatGPT,” including credential state.
3. Tool execution and extensions. Codex executes shell/file tools in a sandbox and wires up integrations like MCP servers and skills so they can participate in the agent loop under a consistent policy model.
All the agent logic we mentioned here, including the core agent loop, lives in a part of the Codex CLI codebase called “Codex core.” Codex core is both a library where all the agent code lives and a runtime that can be spun up to run the agent loop and manage the persistence of one Codex thread (conversation).
To be useful, the Codex harness needs to be accessible to clients. That’s where the App Server comes in.
The App Server is both the JSON-RPC protocol between the client and the server and a long-lived process that hosts the Codex core threads. As we can see from the diagram above, an App Server process has four main components: the stdio reader, the Codex message processor, the thread manager, and core threads. The thread manager spins up one core session for each thread, and the Codex message processor then communicates with each core session directly to submit client requests and receive updates.
One client request can result in many event updates, and these detailed events are what allow us to build a rich UI on top of the App Server. Furthermore, the stdio reader and the Codex message processor serve as the translation layer between the client and Codex core threads. They translate client JSON-RPC requests into Codex core operations, listen to Codex core’s internal event stream, and then transform those low-level events into a small set of stable, UI-ready JSON-RPC notifications.
The JSON-RPC protocol between the client and the App Server is fully bidirectional. A typical thread has a client request and many server notifications. In addition, the server can initiate requests when the agent needs input, like an approval, and then pause the turn until the client responds.
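Concretely, a client reading this bidirectional channel has to tell three message shapes apart on the same stream. The sketch below applies the standard JSON-RPC rule (a request carries both `method` and `id`, a notification carries only `method`, a response carries only `id`); it is illustrative client code, not the official bindings:

```typescript
// Classifying incoming messages on the App Server's bidirectional channel.
// A server-initiated request (e.g. an approval) expects a reply; a
// notification (e.g. item/*/delta) is fire-and-forget; a response answers
// one of the client's own earlier requests.
type Incoming =
  | { kind: "response"; id: number; result?: unknown; error?: unknown }
  | { kind: "serverRequest"; id: number; method: string; params?: unknown }
  | { kind: "notification"; method: string; params?: unknown };

function classify(raw: string): Incoming {
  const msg = JSON.parse(raw);
  if (msg.method !== undefined && msg.id !== undefined) {
    return { kind: "serverRequest", id: msg.id, method: msg.method, params: msg.params };
  }
  if (msg.method !== undefined) {
    return { kind: "notification", method: msg.method, params: msg.params };
  }
  return { kind: "response", id: msg.id, result: msg.result, error: msg.error };
}
```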
The conversation primitives
Next, we’ll break down the conversation primitives, the building blocks of the App Server protocol. Designing an API for an agent loop is tricky because the user/agent interaction is not a simple request/response. One user request can unfold into a structured sequence of actions that the client needs to represent faithfully: the user’s input, the agent’s incremental progress, artifacts produced along the way (e.g., diffs). To make that interaction stream easy to integrate and resilient across UIs, we landed on three core primitives with clear boundaries and lifecycles:
1. Item: An item is the atomic unit of input/output in Codex. Items are typed (e.g., user message, agent message, tool execution, approval request, diff) and each has an explicit lifecycle:
- item/started when the item begins
- optional item/*/delta events as content streams in (for streaming item types)
- item/completed when the item finalizes with its terminal payload
This lifecycle lets clients start rendering immediately on started, stream incremental updates on delta, and finalize on completed.
2. Turn: A turn is one unit of agent work initiated by user input. It begins when the client submits an input (for example, “run tests and summarize failures”) and ends when the agent finishes producing outputs for that input. A turn contains a sequence of items that represent the intermediate steps and outputs produced along the way.
3. Thread: A thread is the durable container for an ongoing Codex session between a user and an agent. It contains multiple turns. Threads can be created, resumed, forked, and archived. Thread history is persisted so clients can reconnect and render a consistent timeline.
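The item lifecycle maps naturally onto a small reducer in a client UI: create a placeholder on started, append on delta, finalize on completed. In this sketch the event and field names (`itemId`, `text`, a flattened `item/delta`) are illustrative assumptions rather than the exact App Server schema; the generated protocol definitions are authoritative:

```typescript
// Assumed event shapes modeling the item/started -> item/*/delta ->
// item/completed lifecycle described above.
type ItemEvent =
  | { type: "item/started"; itemId: string; itemType: string }
  | { type: "item/delta"; itemId: string; text: string }
  | { type: "item/completed"; itemId: string; text?: string };

interface RenderedItem { itemType: string; text: string; done: boolean }

function applyEvent(view: Map<string, RenderedItem>, ev: ItemEvent): void {
  switch (ev.type) {
    case "item/started":
      // Start rendering immediately, before any content arrives.
      view.set(ev.itemId, { itemType: ev.itemType, text: "", done: false });
      break;
    case "item/delta": {
      const item = view.get(ev.itemId);
      if (item) item.text += ev.text; // stream incremental content
      break;
    }
    case "item/completed": {
      const item = view.get(ev.itemId);
      if (item) {
        if (ev.text !== undefined) item.text = ev.text; // terminal payload wins
        item.done = true;
      }
      break;
    }
  }
}
```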
Now, we’ll look at a simplified conversation between a client and an agent, where the conversation is represented by primitives:
At the beginning of the conversation, the client and the server need to establish the initialize handshake. The client must send a single initialize request before any other method, and the server acknowledges with a response. This gives the server a chance to advertise capabilities and lets both sides agree on protocol versioning, feature flags, and defaults before the real work begins. Here’s an example payload from OpenAI’s VS Code extension:
JSON
{
  "method": "initialize",
  "id": 0,
  "params": {
    "clientInfo": {
      "name": "codex_vscode",
      "title": "Codex VS Code Extension",
      "version": "0.1.0"
    }
  }
}
This is what the server returns:
JSON
{
  "id": 0,
  "result": {
    "userAgent": "codex_vscode/0.94.0-alpha.7 (Mac OS 26.2.0; arm64) vscode/2.4.22 (codex_vscode; 0.1.0)"
  }
}
When a client makes a new request, it will first create a thread and then a turn. The server will send back notifications for progress (thread/started and turn/started). It will also send back inputs it registers as items, like the user message here.
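The opening sequence can be sketched as two JSON-RPC requests. The request method names and parameter shapes here are assumptions inferred from the thread/started and turn/started notifications above; check the generated TypeScript definitions for the real schema:

```typescript
// Illustrative only: "thread/start", "turn/start", and the param fields
// are assumed names, not the verified App Server schema.
let nextId = 1;
function request(method: string, params: object) {
  return { method, id: nextId++, params };
}

const startThread = request("thread/start", { cwd: "/path/to/workspace" });
const startTurn = request("turn/start", {
  threadId: "<thread id from the thread/started notification>",
  input: [{ type: "text", text: "run tests and summarize failures" }],
});
```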
Tool calls are also sent back to the client as items. Additionally, the server may ask for client approval before it can run an action by sending a server request. The approval will pause the turn until the client replies with either “allow” or “deny.” This is what the approval flow looks like in the VS Code extension:

In the end, the server streams the agent message back: delta events carry pieces of the message until it is finalized with item/completed, and the server then ends the turn with turn/completed.
The messages in the diagram are simplified for readability. If you want to see the JSON for a full turn, you can run the test client from the Codex CLI repo:
Bash
codex debug app-server send-message-v2 "run tests and summarize failures"
Integrating with clients
Now, let’s look at how different client surfaces embed Codex via the App Server. We’ll cover three patterns: local apps and IDEs, the Codex web runtime, and the TUI.
Across all three, the transport is JSON-RPC over stdio (JSONL). JSON-RPC makes it straightforward to build client bindings in the language of your choice. Codex surfaces and partner integrations have implemented App Server clients in languages including Go, Python, TypeScript, Swift, and Kotlin. For TypeScript, you can generate definitions directly from the Rust protocol by running:
Bash
codex app-server generate-ts
For other languages, you can generate a JSON Schema bundle and feed it into your preferred code generator by running:
Bash
codex app-server generate-json-schema
Local Apps & IDEs

Local clients typically bundle or fetch a platform-specific App Server binary, launch it as a long-running child process, and keep a bidirectional stdio channel open for JSON-RPC. In our VS Code extension and Desktop App, for example, the shipped artifact includes the platform-specific Codex binary and is pinned to a tested version so the client always runs the exact bits we validated.
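That pattern can be sketched in a few lines of Node-flavored TypeScript: spawn the binary, frame each message as one JSON object per line, and send initialize first. The binary name and subcommand are assumptions (real clients ship a pinned, platform-specific binary), and the dispatch here is deliberately minimal:

```typescript
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// JSONL framing: one JSON-RPC message per line on the stdio channel.
function frame(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

// Launch the App Server as a long-lived child process. "codex app-server"
// is an assumed invocation; a real client would point at its bundled binary.
function launchAppServer(binary = "codex") {
  const child = spawn(binary, ["app-server"], { stdio: ["pipe", "pipe", "inherit"] });
  const lines = createInterface({ input: child.stdout! });
  lines.on("line", (line) => {
    const msg = JSON.parse(line);
    // A real client would dispatch notifications, responses, and
    // server-initiated requests here.
    console.log("received", msg.method ?? `response ${msg.id}`);
  });
  // The initialize request must be the first message on the channel.
  child.stdin!.write(frame({
    method: "initialize",
    id: 0,
    params: { clientInfo: { name: "my_client", title: "My Client", version: "0.1.0" } },
  }));
  return child;
}
```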
Not every integration can ship client updates frequently. Some partners like Xcode decouple release cycles by keeping the client stable and allowing it to point to a newer App Server binary when needed. That way they can adopt server-side improvements (for example, better auto-compaction in Codex core or newly supported config keys) and roll out bug fixes without waiting for a client release. The App Server’s JSON-RPC surface is designed to be backward compatible, so older clients can talk to newer servers safely.
Codex Web

Codex Web uses the Codex harness, but runs it in a container environment. A worker provisions a container with the checked-out workspace, launches the App Server binary inside it, and maintains a long-lived JSON-RPC-over-stdio channel. The web app (running in the user’s browser tab) talks to the Codex backend over HTTP and SSE, which streams task events produced by the worker. This keeps the browser-side UI lightweight while still giving us a consistent runtime across desktop and web.
Because web sessions are ephemeral (tabs close, networks drop), the web app cannot be the source of truth for long-running tasks. Keeping state and progress on the server means work continues even if the tab disappears. The streaming protocol and saved thread sessions make it easy for a new session to reconnect, pick up where it left off, and catch up without rebuilding state in the client.
TUI/Codex CLI

Historically, the TUI was a “native” client that ran in the same process as the agent loop and talked directly to Rust core types rather than the app-server protocol. That made early iteration fast, but it also made the TUI a special-case surface.
Now that the App Server exists, we plan to refactor the TUI to use it so it behaves like any other client: launch an App Server child process, speak JSON-RPC over stdio, and render the same streaming events and approvals. This unlocks workflows where the TUI can connect to a Codex server running on a remote machine, keeping the agent close to compute and continuing work even if the laptop sleeps or disconnects, while still delivering live updates and controls locally.
Choosing the right protocol
Codex App Server will be the first-class integration method we maintain moving forward, but there are also other methods with more limited functionality. By default, we’d recommend that clients use Codex App Server to integrate with Codex, but it’s worth taking a look at different integration methods and understanding their pros and cons. Below are the most common ways to drive Codex and when each might be a good fit.
JSON-RPC protocols
Codex as an MCP server
Run codex mcp-server and connect from any MCP client that supports stdio servers (e.g., OpenAI Agents SDK). This is a good fit if you already have an MCP-based workflow and want to invoke Codex as a callable tool. The downside is that you only get what MCP exposes, so Codex-specific interactions that rely on richer session semantics (e.g., diff updates) may not map cleanly through MCP endpoints.
Cross-provider agent harness protocols
Some ecosystems offer a portable interface that can target multiple model providers and runtimes. This can be a good fit if you want one abstraction that coordinates multiple agents. The tradeoff is that these protocols often converge on the common subset of capabilities, which can make richer interactions harder to represent, especially when provider-specific tool and session semantics matter. This space is evolving quickly, and we expect that more common standards will emerge as we figure out the best primitives to represent real-world agent workflows (skills is a good example of this).
Codex App Server
Choose the App Server when you want the full Codex harness exposed as a stable, UI-friendly event stream. You get both the full functionality of the agent loop and other supporting features like Sign in with ChatGPT, model discovery, and configuration management. The main cost is integration work, since you need to build the client-side JSON-RPC binding in your language. In practice, however, Codex is able to do a lot of the heavy lifting if you feed it the JSON schema and documentation. Many teams we worked with were able to get to a working integration quickly using Codex.
Other ways to embed Codex
codex exec
A lightweight, scriptable CLI mode for one-off tasks and CI runs. It’s a good fit for automation and pipelines where you want a single command to run to completion non-interactively, stream structured output for logs, and exit with a clear success or failure signal.
Codex SDK
A TypeScript library for programmatically controlling local Codex agents from within your own application. It’s best when you want a native library interface for server-side tools and workflows without building a separate JSON-RPC client. Since it shipped earlier than the App Server, it currently supports fewer languages and a smaller surface area. If there is developer interest, we may add additional SDKs that wrap the App Server protocol so teams can cover more of the harness surface without writing JSON-RPC bindings.
Taking this forward
In this post, we shared how we approach designing a new standard for interacting with agents and how to turn the Codex harness into a stable, client-friendly protocol. We covered how the App Server exposes Codex core, lets clients drive the full agent loop, and powers a wide range of surfaces including the TUI, local IDE integrations, and the web runtime.
If this sparked ideas for integrating Codex into your own workflows, it’s worth giving the App Server a try. All the source code lives in the Codex CLI open-source repo (see the app-server README there). Feel free to share your feedback and feature requests. We’re excited to hear from you and to keep making agents more accessible to everyone.