A thin OpenAI-compatible bridge for the ChatGPT-backed Codex backend.
It exposes:

- `GET /healthz`
- `GET /v1/models`
- `POST /v1/chat/completions`
- `POST /v1/responses`
Internally, it translates OpenAI-compatible requests into Codex `responses` requests sent to:
https://chatgpt.com/backend-api/codex/responses
Typical deployment shape:
App / Tooling -> LiteLLM -> Codex API Bridge -> ChatGPT Codex backend
This project is meant for self-hosted / personal use when you want to expose Codex behind an OpenAI-compatible API surface.
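To wire that shape up with containers, something like the following Docker Compose file can work. This is a minimal sketch: the LiteLLM image, port, config mount, and command are assumptions about a typical LiteLLM proxy setup, not part of this project.

```yaml
# Hypothetical docker-compose.yml putting LiteLLM in front of the bridge.
services:
  codex-bridge:
    image: ghcr.io/leespo/codex-api-bridge:latest
    ports:
      - "8088:8088"
    volumes:
      - ./data:/data                               # credentials persist here
  litellm:
    image: ghcr.io/berriai/litellm:main-latest     # assumed LiteLLM proxy image
    command: ["--config", "/app/config.yaml"]
    ports:
      - "4000:4000"
    volumes:
      - ./litellm-config.yaml:/app/config.yaml     # points at http://codex-bridge:8088/v1
```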
- Interactive OAuth login inside the container (`login` command)
- Stores `access_token`, `refresh_token`, `expires_at`, `account_id` in `/data/credentials.json`
- Auto refresh on expiry
- Supports streamed and non-streamed `chat/completions`
- Supports streamed and non-streamed `responses`
- Basic tool-call translation (`tools`, `tool_choice`, tool-call deltas)
- Optional inbound API key guard with `BRIDGE_API_KEY` (see the sketch after this list)
- GitHub Actions workflow for automatic multi-arch Docker builds to GHCR:
  - `linux/amd64`
  - `linux/arm64`
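When the guard is enabled, the bridge expects the configured value as an inbound Bearer token. A quick sketch; the key value is just a placeholder:

```
# Start the container with BRIDGE_API_KEY set, then
# authenticate inbound calls with the same value:
curl http://127.0.0.1:8088/v1/models \
  -H 'Authorization: Bearer change-me'
```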
Verified against a real ChatGPT/Codex-backed upstream on a ChatGPT account:

- `gpt-5.4`
- `gpt-5.3-codex`
- `gpt-5.2-codex`
- `gpt-5.2`
- `gpt-5.1`
- `gpt-5.1-codex-max`
- `gpt-5.1-codex-mini`

Currently not supported on ChatGPT accounts:

- `gpt-5.3-codex-spark`
```
docker pull ghcr.io/leespo/codex-api-bridge:latest
```

```
mkdir -p ~/codex-api-bridge/data
cd ~/codex-api-bridge

docker run -d \
  --name codex-bridge \
  -p 8088:8088 \
  -v "$(pwd)/data:/data" \
  ghcr.io/leespo/codex-api-bridge:latest
```

```
docker exec -it codex-bridge node src/cli.js login
```

What happens:
- The container prints an OpenAI auth URL
- Open it in your browser
- After login, the browser redirects to something like:
  `http://localhost:1455/auth/callback?code=...&state=...`
- Copy the full callback URL from the address bar
- Paste it back into the terminal prompt
- Credentials are stored in `/data/credentials.json`
If the localhost page itself does not load, that is still okay. The address bar URL is what matters.
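For orientation, here is a sketch of what `/data/credentials.json` might look like, using the four fields listed above. The exact JSON shape and value formats are implementation details and may differ:

```json
{
  "access_token": "eyJ...",
  "refresh_token": "rt-...",
  "expires_at": 1760000000,
  "account_id": "acct-..."
}
```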
```
docker exec -it codex-bridge node src/cli.js whoami
```

```
curl http://127.0.0.1:8088/healthz
```

```
podman pull ghcr.io/leespo/codex-api-bridge:latest
```

```
mkdir -p ~/codex-api-bridge/data
cd ~/codex-api-bridge

podman run -d \
  --name codex-bridge \
  -p 8088:8088 \
  -v "$(pwd)/data:/data:Z" \
  ghcr.io/leespo/codex-api-bridge:latest
```

```
podman exec -it codex-bridge node src/cli.js login
```

```
curl http://127.0.0.1:8088/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {"role": "user", "content": "hello"}
    ]
  }'
```
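Tool calls go through the same endpoint. A sketch of the basic `tools` / `tool_choice` translation the bridge supports, using the standard OpenAI chat-completions tool schema; the `get_weather` function is purely illustrative:

```
curl http://127.0.0.1:8088/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
```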
```
curl http://127.0.0.1:8088/v1/responses \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "input": "hello"
  }'
```

```
curl -N http://127.0.0.1:8088/v1/responses \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "stream": true,
    "input": "hello"
  }'
```

Example LiteLLM `config.yaml` entry:

```yaml
model_list:
  - model_name: codex-gpt-5-4
    litellm_params:
      model: openai/gpt-5.4
      api_base: http://codex-bridge:8088/v1
      api_key: dummy
```

- Use the `openai/` prefix in the model name when configuring an OpenAI-compatible upstream.
- For OpenAI-compatible routing, `api_base` should point at this bridge and typically end with `/v1`.
- If LiteLLM and `codex-bridge` are running in separate containers, make sure they are on the same user-defined Docker network.
- If LiteLLM's UI "Test Connection" shows a strange cURL example, do not trust that rendered cURL literally; LiteLLM performs the real health check server-side.
```
docker network create ai-net

docker run -d \
  --name codex-bridge \
  --network ai-net \
  -p 8088:8088 \
  -v "$(pwd)/data:/data" \
  ghcr.io/leespo/codex-api-bridge:latest
```

Then LiteLLM can use:

`http://codex-bridge:8088/v1`
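Once LiteLLM is routing to the bridge, clients talk to LiteLLM rather than to the bridge directly. A sketch, assuming the LiteLLM proxy listens on its default port 4000 and uses the `codex-gpt-5-4` alias from the config above:

```
# The Authorization header is only needed if LiteLLM
# is configured with a master key:
curl http://127.0.0.1:4000/v1/chat/completions \
  -H 'content-type: application/json' \
  -H 'Authorization: Bearer sk-anything' \
  -d '{
    "model": "codex-gpt-5-4",
    "messages": [{"role": "user", "content": "hello"}]
  }'
```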
| Variable | Default | Meaning |
|---|---|---|
| `PORT` | `8088` | HTTP listen port |
| `DATA_DIR` | `/data` | Credential storage directory |
| `CREDENTIALS_FILE` | `<DATA_DIR>/credentials.json` | Credential file path |
| `CODEX_BASE_URL` | `https://chatgpt.com/backend-api` | Upstream base URL |
| `CODEX_BRIDGE_MODELS` | built-in list | Comma-separated list of models exposed by `/v1/models` |
| `CODEX_DEFAULT_MODEL` | first model | Default model fallback |
| `BRIDGE_API_KEY` | empty | Optional inbound Bearer token required by the bridge |
| `CODEX_TEXT_VERBOSITY` | `medium` | Upstream text verbosity |
| `CODEX_REASONING_EFFORT` | empty | Optional upstream reasoning effort |
| `CODEX_REASONING_SUMMARY` | `auto` | Optional upstream reasoning summary |
| `CODEX_ORIGINATOR` | `codex-bridge` | Upstream originator header |
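These are plain environment variables, so they can be set directly on the `docker run` line. A sketch with illustrative values; in particular, `CODEX_REASONING_EFFORT` is assumed here to accept OpenAI-style values like `high`:

```
docker run -d \
  --name codex-bridge \
  -p 8088:8088 \
  -v "$(pwd)/data:/data" \
  -e BRIDGE_API_KEY=change-me \
  -e CODEX_TEXT_VERBOSITY=low \
  -e CODEX_REASONING_EFFORT=high \
  ghcr.io/leespo/codex-api-bridge:latest
```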
This repository includes:

`.github/workflows/docker.yml`

On pushes to `main` and tags matching `v*`, GitHub Actions builds a multi-arch image for:

- `linux/amd64`
- `linux/arm64`

and publishes it to:

`ghcr.io/leespo/codex-api-bridge`
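If the workflow publishes version tags alongside `latest` (the exact tag scheme is an assumption), a specific release can be pinned:

```
docker pull ghcr.io/leespo/codex-api-bridge:v1.2.3   # illustrative tag
```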
```
docker build -t codex-api-bridge:dev .
# or
podman build -t localhost/local/codex-openai-bridge:dev .
```

```
docker run -d \
  --name codex-bridge \
  -p 8088:8088 \
  -v "$(pwd)/data:/data" \
  codex-api-bridge:dev
```

```
node test/mock-codex-server.js
```

```
src/cli.js
src/server.js
src/oauth.js
src/codex-client.js
src/transform.js
src/sse.js
src/credentials.js
.github/workflows/docker.yml
Dockerfile
```
- This is built for personal / self-hosted use.
- Upstream Codex is not a stable public OpenAI Platform endpoint. Expect breakage when OpenAI changes headers, events, or request fields.
- Tool support is intentionally minimal but usable for LiteLLM-style chat-completions and responses flows.
- Several OpenAI-style compatibility fields are currently accepted by the bridge but dropped before forwarding, because the ChatGPT-backed Codex upstream rejects them explicitly. This currently includes:
  - token-cap fields: `max_tokens`, `max_completion_tokens`, `max_output_tokens`
  - sampling / penalty fields: `temperature`, `top_p`, `presence_penalty`, `frequency_penalty`
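Because these fields are dropped rather than rejected, a request that includes them should still succeed. A sketch:

```
# temperature and max_tokens are accepted by the bridge,
# then silently stripped before the request is forwarded upstream:
curl http://127.0.0.1:8088/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "temperature": 0.2,
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "hello"}]
  }'
```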
The bridge focuses on text/chat/responses compatibility first. It is not a full implementation of the entire OpenAI API surface.