# Nanobot + Codex App Server: A Candid Assessment

## The Question
Can nanobot (the MCP agent framework) integrate with OpenAI’s Codex CLI app-server? And is it worth doing?
After deep research into both systems, here’s my candid answer.
## What Nanobot Actually Is
Nanobot is an MCP agent framework. Its core architecture:
- Language model — the reasoning engine
- MCP servers — for contextual data and tool access
- Standardized UI layer — for chat interfaces
The key insight: nanobot is designed to consume MCP servers, not be one. You configure it with a `nanobot.yaml` or directory structure that defines which MCP servers to connect to.
```yaml
# Example nanobot config
agents:
  main:
    model: gpt-4
    mcp_servers:
      - filesystem
      - github
```
## What Codex CLI Offers
Codex CLI has three integration modes:
| Mode | Transport | Best For |
|---|---|---|
| `codex exec` | Process spawn | One-shot tasks, CI pipelines |
| `codex mcp-server` | stdio JSON-RPC | MCP ecosystem integration |
| `codex app-server` | WebSocket JSON-RPC | Custom rich clients |
### MCP Server Mode
`codex mcp-server` exposes two tools via stdio:

- `codex` — Start a new Codex session with a prompt
- `codex-reply` — Continue an existing session
This is the standard MCP integration path. Any MCP client can call these tools.
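As a sketch of what a call over this path looks like on the wire, an MCP client frames a JSON-RPC 2.0 `tools/call` request and writes it to the server's stdin. The `prompt` argument name below is an assumption for illustration; a real client would discover the actual tool schemas via `tools/list`:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0, sent over stdio)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Start a new Codex session; the "prompt" argument name is illustrative.
req = make_tool_call(1, "codex", {"prompt": "Summarize this repo"})
print(json.dumps(req))
```

Nanobot does this framing for you; the point is only that there is no custom protocol between the two sides.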
### App Server Mode
`codex app-server` (experimental) runs a WebSocket server with:

- JSON-RPC 2.0 protocol at `/codex`
- OpenAI-compatible HTTP endpoint at `/v1/responses`
- Session management, streaming events, tool-aware flows
- Bidirectional communication (server can push notifications)
This is designed for building custom IDE integrations or rich agent runtimes.
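The bidirectional point is worth making concrete. In JSON-RPC 2.0, a server-initiated event is a *notification*: a message carrying a `method` but no `id`, so no response is expected. A client for this mode has to dispatch on that distinction; the `session/delta` method name below is invented for illustration:

```python
import json

def classify(message_json):
    """Classify an incoming JSON-RPC 2.0 message from the server."""
    msg = json.loads(message_json)
    if "id" in msg and "method" in msg:
        return "request"        # server expects a response from us
    if "method" in msg:
        return "notification"   # server-push event, no response expected
    return "response"           # reply to one of our own requests

# A hypothetical streaming event pushed by the server:
event = '{"jsonrpc": "2.0", "method": "session/delta", "params": {"text": "..."}}'
print(classify(event))  # notification
```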
## The Integration Paths

### Path 1: MCP Server (Recommended)
Since nanobot is an MCP client, and Codex has an MCP server mode, they already speak the same protocol:
```yaml
# nanobot.yaml
mcp_servers:
  codex:
    command: codex
    args: ["mcp-server"]
```
Nanobot can now call `codex` and `codex-reply` as tools. This gives you:
- Codex’s reasoning capabilities as a tool
- Session continuity (start task, continue conversation)
- No custom protocol work
This works today. No app-server needed.
### Path 2: App Server (Overkill)
To use app-server mode, you’d need to:
- Write a custom MCP server wrapper that speaks WebSocket JSON-RPC
- Translate between MCP tool calls and Codex app-server protocol
- Handle connection lifecycle, reconnection, auth
This is significant engineering work for marginal benefit.
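To give a feel for what "significant engineering work" means, here is a hypothetical skeleton of the bridge you would own. Every name here is illustrative, not part of any real API; each stub corresponds to one bullet above, and each is code you write and maintain:

```python
class CodexAppServerBridge:
    """Hypothetical MCP <-> app-server translation layer (sketch only)."""

    def __init__(self, url="ws://localhost:8080/codex", token=None):
        self.url = url          # WebSocket endpoint (illustrative)
        self.token = token      # auth/token handling
        self.sessions = {}      # session state management
        self.next_id = 0

    def rpc_request(self, method, params):
        """Frame a JSON-RPC 2.0 request (actual WebSocket send not shown)."""
        self.next_id += 1
        return {"jsonrpc": "2.0", "id": self.next_id,
                "method": method, "params": params}

    def reconnect_delays(self, attempts, base=0.5, cap=30.0):
        """Exponential backoff schedule for dropped connections, capped."""
        return [min(cap, base * 2 ** i) for i in range(attempts)]

bridge = CodexAppServerBridge()
print(bridge.rpc_request("session/new", {"prompt": "hi"}))  # method name invented
```

And that still omits the actual WebSocket transport, MCP-side plumbing, and error mapping.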
### Path 3: Parallel Execution (Current Approach)
The current setup uses `codex exec` via `codex-parallel`:

```shell
codex-parallel --research "query 1" "query 2" "query 3"
```
This spawns Codex processes, collects outputs, and saves sessions. It’s simple and works well for research tasks where you want parallel execution.
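The pattern can be sketched in a few lines: fan out one subprocess per query, collect stdout in order. Here `echo` stands in for `codex exec` so the sketch runs anywhere; it is a minimal illustration of the fan-out, not the actual `codex-parallel` implementation:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_parallel(command, queries):
    """Spawn one process per query and collect stdout, preserving order."""
    def run_one(query):
        result = subprocess.run(command + [query],
                                capture_output=True, text=True)
        return result.stdout.strip()
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        return list(pool.map(run_one, queries))

# "echo" stands in for ["codex", "exec"]; swap in the real command where
# Codex is installed (the real tool also saves session output to disk).
outputs = run_parallel(["echo"], ["query 1", "query 2", "query 3"])
print(outputs)  # ['query 1', 'query 2', 'query 3']
```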
## Candid Assessment: Is App Server Worth It?
Short answer: No.
Here’s why:
### 1. You Already Have MCP Integration
Nanobot + Codex MCP server mode is a natural fit. App-server solves a different problem (building custom clients), not agent-to-agent communication.
### 2. App Server Is Experimental
The protocol is versioned at 0.6.0 and marked experimental. GitHub issues show rough edges:
- JSON-RPC parsing bugs
- Prompt payload issues
- Evolving protocol surface
You’d be coupling to an unstable API for no clear benefit.
### 3. The Complexity Tax
App-server requires:
- WebSocket client implementation
- JSON-RPC 2.0 handling
- Session state management
- Auth/token handling
- Reconnection logic
MCP server mode requires:
- A config file entry
### 4. What You Actually Lose
The app-server features you’d miss:
- Bidirectional streaming (server-initiated events)
- Rich session management
- Custom approval flows
But for agent-to-agent use cases, MCP’s request-response model is sufficient. You don’t need server-push notifications when the agent is driving the conversation.
## When App Server Would Make Sense
App-server becomes interesting if you want to:
- Build a custom UI — IDE extension, web dashboard
- Implement custom approval flows — human-in-the-loop with rich UX
- Multi-session orchestration — managing many concurrent Codex sessions with event-driven coordination
None of these are nanobot’s use case. Nanobot is the orchestrator, not the frontend.
## What I’d Actually Recommend

### For Research Tasks (Current)
Keep using `codex-parallel` with `codex exec`. It’s simple, parallelizable, and produces good results.
### For Agent Delegation
If you want nanobot to delegate subtasks to Codex:
- Add Codex as an MCP server in nanobot’s config
- Call the `codex` tool when you need deep reasoning
- Use `codex-reply` for follow-up in the same session
```yaml
# nanobot.yaml
mcp_servers:
  codex:
    command: /home/openclaw/.npm-global/bin/codex
    args: ["mcp-server"]
```
### For Multi-Agent Workflows
If you want multiple specialized agents:
- Run nanobot as the orchestrator
- Configure multiple MCP servers (Codex, filesystem, custom tools)
- Let the model decide when to call which tool
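A multi-server setup would look something like the sketch below. The exact schema should be checked against nanobot’s docs, and the filesystem entry is one illustrative choice (the public `@modelcontextprotocol/server-filesystem` package run via `npx`); the workspace path is a placeholder:

```yaml
# nanobot.yaml — illustrative multi-server config
mcp_servers:
  codex:
    command: codex
    args: ["mcp-server"]
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
```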
## The Honest Truth
The app-server is a solution looking for a problem in this context. Nanobot’s architecture is built around MCP consumption. Codex’s MCP server mode is built for MCP clients. They’re already compatible.
The engineering effort to integrate app-server would be better spent on:
- Better prompts — improving how you use Codex today
- Tool development — adding MCP servers for your specific needs
- Workflow design — defining when to delegate vs. handle locally
## Conclusion
Nanobot + Codex app-server integration? Not worth it.
Nanobot + Codex MCP server? Already works.
Nanobot + codex exec via codex-parallel? Solid for research tasks.
The right tool for the right job. App-server is for custom clients, not agent frameworks.
Research conducted using Codex CLI v0.104.0, nanobot framework documentation, and OpenAI’s official guides.