- Streaming: SSE with `response.created`, `response.output_text.delta`, and `response.completed` events
- Non-streaming: `{ response, usage, responseId }`
image_generation tool
Declare `{"type": "image_generation", ...}` in `tools[]` to let the model invoke
the server-side image generation backend (gpt-image-2). Requires a ChatGPT
Plus or higher account; on free plans the tool is silently stripped upstream
and the model falls back to returning SVG text.
Size constraints:

- Width and height must both be divisible by 16.
- Longest edge ≤ 3840 px.
- Total pixel budget ≈ 8 MP (3072×3072 is rejected).
- Resolutions below 1024 px are also rejected (minimum pixel budget).
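As a quick sanity check, the constraints above can be sketched as a validator. The ~8 MP ceiling and the ~1 MP minimum are approximations inferred from the text, not exact upstream limits:

```python
def validate_size(width: int, height: int) -> None:
    """Reject sizes that violate the documented constraints (approximate sketch)."""
    if width % 16 or height % 16:
        raise ValueError("width and height must both be divisible by 16")
    if max(width, height) > 3840:
        raise ValueError("longest edge must be <= 3840 px")
    if width * height > 8_000_000:  # ~8 MP budget; 3072x3072 (~9.4 MP) is over
        raise ValueError("total pixel budget (~8 MP) exceeded")
    if width * height < 1024 * 1024:  # assumed minimum budget of ~1 MP
        raise ValueError("below the minimum pixel budget")
```

For example, `validate_size(1024, 1024)` passes, while `validate_size(3072, 3072)` raises on the pixel budget.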
| Field | Allowed values | Default | Notes |
|---|---|---|---|
| `output_format` | `png` / `jpeg` / `webp` | `png` | `gif` is rejected |
| `output_compression` | integer 0–100 | 100 | `jpeg` / `webp` only — PNG rejects any non-100 value |
| `background` | `auto` / `opaque` | `auto` | `transparent` is rejected for this model |
| `moderation` | `auto` / `low` | `auto` | other enums rejected |
| `partial_images` | integer 0–3 | 0 | values > 3 rejected |
Silently rewritten / hard-rejected fields:

- `model` — whatever you send, upstream forces gpt-image-2.
- `quality` — any value is echoed back as `auto`; the user-supplied value has no effect.
- `n` — rejected (`unknown_parameter`); one image per call.
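Putting the accepted knobs together, a request sketch. The tool-level placement of these fields, and the `size` field itself, follow the shape of OpenAI's Responses API and are assumptions, not verified against this proxy:

```jsonc
{
  "tools": [
    {
      "type": "image_generation",
      "size": "1024x1024",       // assumed field; both edges divisible by 16
      "output_format": "webp",   // png / jpeg / webp; gif is rejected
      "output_compression": 85,  // jpeg / webp only; PNG requires 100
      "background": "auto",      // transparent is rejected
      "moderation": "auto",      // auto / low only
      "partial_images": 2        // integer 0-3
    }
  ]
}
```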
The optional bridge runs on a separate listener, defaulting to http://127.0.0.1:11434.
It is disabled by default and can be controlled through Dashboard settings or the admin
API. Ollama endpoints are intentionally unauthenticated; keep the listener bound to
localhost unless you explicitly trust the network.
Browser CORS access is restricted to loopback origins (`localhost`, `127.x.x.x`,
and `::1`), so non-local web pages cannot read bridge responses by default. The
bridge injects the configured Codex Proxy API key into `/v1/*` passthrough
requests, so exposing it beyond localhost also exposes the main proxy API
without clients needing to know that key.
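The loopback-origin restriction can be sketched with a small check (a hypothetical helper, not the bridge's actual implementation):

```python
import ipaddress
from urllib.parse import urlparse

def is_loopback_origin(origin: str) -> bool:
    """Return True when a browser Origin header points at a loopback host."""
    host = urlparse(origin).hostname  # strips scheme, port, and IPv6 brackets
    if host is None:
        return False
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback  # 127.0.0.0/8 and ::1
    except ValueError:  # not an IP literal (e.g. a real domain name)
        return False
```

Under this sketch, `http://127.0.0.1:3000` and `http://[::1]:8080` pass while `https://example.com` does not.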
| Method | Path | Description |
|---|---|---|
| GET | `/api/version` | Version probe → `{ version }` |
| GET | `/api/tags` | Model list in Ollama format |
| POST | `/api/show` | Model metadata and capabilities |
| POST | `/api/chat` | Chat completions, streamed as NDJSON by default |
| Any | `/v1/*` | OpenAI-compatible passthrough to the main proxy |
```jsonc
// POST http://127.0.0.1:11434/api/chat
{
  "model": "codex",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": true,
  "think": "medium"  // optional: false | true | low | medium | high | xhigh
}
```
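A minimal consumer sketch for the NDJSON stream, assuming each line follows Ollama's chat response shape (`message.content` deltas plus a final `done` marker); verify the field names against the bridge's actual output:

```python
import json
from typing import Iterable, Iterator

def stream_deltas(lines: Iterable[bytes]) -> Iterator[str]:
    """Yield content deltas from an NDJSON /api/chat response body."""
    for raw in lines:
        line = raw.strip()
        if not line:
            continue  # skip blank keep-alive lines
        chunk = json.loads(line)
        if chunk.get("done"):
            break  # final chunk signals end of stream
        yield chunk.get("message", {}).get("content", "")
```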