Changes from all commits (39 commits)
7c93e45
Merge pull request #2 from anyrobert/feature/claude
anyrobert Mar 4, 2026
945ba48
fix build to run cli after install
anyrobert Mar 11, 2026
d934552
feat(logging): add verbose traffic logging via CURSOR_BRIDGE_VERBOSE
nickxiao-zoom Mar 12, 2026
8207329
Merge pull request #4 from nicoster/main
anyrobert Mar 12, 2026
667a4c3
fix: pass through auto model identifier to Cursor API
invalid-email-address Mar 14, 2026
549fee4
Merge pull request #8 from sxin0/fix/auto-model-passthrough
anyrobert Mar 14, 2026
eda87f6
feat: add centralized env module
anyrobert Mar 14, 2026
9e4c650
refactor: config and CLI to use env module
anyrobert Mar 14, 2026
bb7d330
feat: deduplicate streaming output
anyrobert Mar 14, 2026
bdf4510
feat: bypass Windows cmd.exe length limit
anyrobert Mar 14, 2026
03a7738
fix: add request-log mocks in server tests
anyrobert Mar 14, 2026
e83efaf
docs: document env variables and centralized env
anyrobert Mar 14, 2026
820ae72
chore: bump version to 0.3.0
anyrobert Mar 14, 2026
d975222
Merge pull request #9 from anyrobert/release-0.3.0
anyrobert Mar 14, 2026
98067ee
feat(config): add maxMode option and agent script path for preflight
anyrobert Mar 15, 2026
6cee0d4
feat: add max-mode preflight module
anyrobert Mar 15, 2026
072d62b
feat(process): run max-mode preflight and set CURSOR_CONFIG_DIR
anyrobert Mar 15, 2026
889cd2f
feat(agent-runner): pass maxMode from config to run/runStreaming
anyrobert Mar 15, 2026
fc7e811
chore(server): log max mode in startup banner
anyrobert Mar 15, 2026
7c4f605
docs: document CURSOR_BRIDGE_MAX_MODE in .env.example
anyrobert Mar 15, 2026
cff38a4
test(server): add maxMode to test config
anyrobert Mar 15, 2026
524da45
Merge pull request #10 from anyrobert/feature/max-mode
anyrobert Mar 15, 2026
c2e95fd
docs: document CURSOR_BRIDGE_MAX_MODE in README
anyrobert Mar 15, 2026
5d1253f
feat: add SDK client for programmatic proxy usage
anyrobert Mar 16, 2026
ec30de7
examples: add SDK client examples (default, OpenAI-style, streaming)
anyrobert Mar 16, 2026
89d3df1
docs: update examples README for SDK usage
anyrobert Mar 16, 2026
11b54c9
docs: document SDK client in main README
anyrobert Mar 16, 2026
192c1a4
chore: bump version to 0.4.0
anyrobert Mar 16, 2026
0cd9fa1
Add ACP (Agent Client Protocol) transport for prompt delivery
Azukay Mar 17, 2026
1c1d5e3
fix(acp): spawn options, per-request timeout, SIGKILL for Windows
Azukay Mar 18, 2026
9375a3b
fix(acp): clear pending requests on child exit to avoid timer leaks
Azukay Mar 18, 2026
64c6f6a
Fix ACP on Windows: skip authenticate, pass model, versioned layout
Azukay Mar 18, 2026
c24d104
Add CURSOR_BRIDGE_ACP_RAW_DEBUG flag for raw JSON-RPC logging
Azukay Mar 18, 2026
d906a67
Isolate agent from global rules when chatOnlyWorkspace
Azukay Mar 18, 2026
52484fa
Address review findings: env-only API key, acpRawDebug test, acp-clie…
Azukay Mar 18, 2026
2f8cb89
refactor(workspace): extract getChatOnlyEnvOverrides, add Windows APP…
Azukay Mar 18, 2026
d44c945
Estimate token usage (chars/4) in chat completions response
Azukay Mar 19, 2026
c17a6db
Ignore .cursor/ (local rules)
Azukay Mar 19, 2026
1f0bfe0
fix(proxy): return display model when request is auto
Azukay Mar 19, 2026
11 changes: 11 additions & 0 deletions .claude/settings.local.json
@@ -0,0 +1,11 @@
{
"permissions": {
"allow": [
"Bash(npm pack:*)",
"Bash(npm install:*)",
"Bash(cursor-api-proxy:*)",
"Bash(node:*)",
"Bash(npm:*)"
]
}
}
4 changes: 4 additions & 0 deletions .env.example
@@ -19,3 +19,7 @@

# Optional: timeout per completion in ms
# CURSOR_BRIDGE_TIMEOUT_MS=300000

# Optional: enable Cursor Max Mode (larger context, more tool calls). Works when using
# CURSOR_AGENT_NODE/SCRIPT or Windows .cmd layout (agent.cmd with node.exe + index.js in same dir).
# CURSOR_BRIDGE_MAX_MODE=true
2 changes: 2 additions & 0 deletions .gitignore
@@ -8,3 +8,5 @@ dist/
*.ts.net.key
# Session/request log (optional; path configurable via CURSOR_BRIDGE_SESSIONS_LOG)
sessions.log
# Local Cursor rules (no-pii etc.) — keep out of public fork
.cursor/
117 changes: 94 additions & 23 deletions README.md
@@ -2,10 +2,12 @@

OpenAI-compatible proxy for Cursor CLI. Expose Cursor models on localhost so any LLM client (OpenAI SDK, LiteLLM, LangChain, etc.) can call them as a standard chat API.

## Prerequisites
This package works as **one npm dependency**: use it as an **SDK** in your app to call the proxy API, and/or run the **CLI** to start the proxy server. Core behavior is unchanged.

## Prerequisites (required for the proxy to work)

- **Node.js** 18+
- **Cursor CLI** (`agent`). This project is developed and tested with `agent` version **2026.02.27-e7d2ef6**. Install and log in:
- **Cursor agent CLI** (`agent`). This package does **not** install or bundle the CLI. You must install and set it up separately. This project is developed and tested with `agent` version **2026.02.27-e7d2ef6**.

```bash
curl https://cursor.com/install -fsS | bash
@@ -17,29 +19,37 @@ OpenAI-compatible proxy for Cursor CLI. Expose Cursor models on localhost so any

## Install

**From npm (use as SDK in another project):**

```bash
npm install cursor-api-proxy
```

**From source (develop or run CLI locally):**

```bash
git clone <this-repo>
cd cursor-api-proxy
npm install
npm run build
```

## Run
## Run the proxy (CLI)

Start the server so the API is available (e.g. for the SDK or any HTTP client):

```bash
npx cursor-api-proxy
# or from repo: npm start / node dist/cli.js
```


To expose on your network (e.g. Tailscale):

```bash
npx cursor-api-proxy --tailscale
```

By default the server listens on **http://127.0.0.1:8765**. Optionally set `CURSOR_BRIDGE_API_KEY` to require `Authorization: Bearer <key>` on requests.
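
To illustrate, the Bearer check this implies can be sketched as follows (illustrative only, not the proxy's actual code):

```javascript
// Hedged sketch of the Authorization check implied by CURSOR_BRIDGE_API_KEY.
// When no key is configured, all requests pass.
function isAuthorized(headers, requiredKey) {
  if (!requiredKey) return true; // open access when unset
  const auth = headers["authorization"] || "";
  return auth === `Bearer ${requiredKey}`;
}
```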

### HTTPS with Tailscale (MagicDNS)

@@ -78,20 +88,25 @@ To serve over HTTPS so browsers and clients trust the connection (e.g. `https://
- Base URL: `https://macbook.tail4048eb.ts.net:8765/v1` (use your MagicDNS name and port)
- Browsers will show a padlock; no certificate warnings when using Tailscale-issued certs.

## Use as SDK in another project

Install the package and ensure the **Cursor agent CLI is installed and set up** (see Prerequisites). When you use the SDK with the default URL, **the proxy starts in the background automatically** if it is not already running. You can still start it yourself with `npx cursor-api-proxy` or set `CURSOR_PROXY_URL` to point at an existing proxy (then the SDK will not start another).

- **Base URL**: `http://127.0.0.1:8765/v1` (override with `CURSOR_PROXY_URL` or options).
- **API key**: Use any value (e.g. `unused`), or set `CURSOR_BRIDGE_API_KEY` and pass it in options or env.
- **Disable auto-start**: Pass `startProxy: false` (or use a custom `baseUrl`) if you run the proxy yourself and don’t want the SDK to start it.
- **Shutdown behavior**: When the SDK starts the proxy, it also stops it automatically when the Node.js process exits or receives normal termination signals. `stopManagedProxy()` is still available if you want to shut it down earlier. `SIGKILL` cannot be intercepted.
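
The auto-stop behavior can be pictured roughly like this (a sketch with hypothetical names; the SDK's actual internals may differ):

```javascript
// Sketch: start a child process and stop it on normal process termination.
// startManagedProxyProcess is a hypothetical name, not the package's API.
import { spawn } from "node:child_process";

function startManagedProxyProcess(command, args) {
  const child = spawn(command, args, { stdio: "ignore" });
  const stop = () => {
    if (!child.killed) child.kill("SIGTERM");
  };
  process.on("exit", stop); // normal exit
  process.on("SIGINT", () => { stop(); process.exit(130); });
  process.on("SIGTERM", () => { stop(); process.exit(143); });
  // SIGKILL cannot be intercepted, so the child may outlive a hard kill.
  return { child, stop };
}
```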

### Option A: OpenAI SDK + helper (recommended)

This is an optional consumer-side example. `openai` is not a dependency of `cursor-api-proxy`; install it only in the app where you want to use this example.

```js
import OpenAI from "openai";
import { getOpenAIOptionsAsync } from "cursor-api-proxy";

const opts = await getOpenAIOptionsAsync(); // starts proxy if needed
const client = new OpenAI(opts);

const completion = await client.chat.completions.create({
model: "gpt-5.2",
messages: [{ role: "user", content: "Hello" }],
});
console.log(completion.choices[0].message.content);
```

For a sync config without auto-start, use `getOpenAIOptions()` and ensure the proxy is already running.

### Option B: Minimal client (no OpenAI SDK)

```js
import { createCursorProxyClient } from "cursor-api-proxy";

const proxy = createCursorProxyClient(); // proxy starts on first request if needed
const data = await proxy.chatCompletionsCreate({
model: "auto",
messages: [{ role: "user", content: "Hello" }],
});
console.log(data.choices?.[0]?.message?.content);
```

### Option C: Raw OpenAI client (no SDK import from this package)

```js
import OpenAI from "openai";

const client = new OpenAI({
baseURL: "http://127.0.0.1:8765/v1",
apiKey: process.env.CURSOR_BRIDGE_API_KEY || "unused",
});
// Start the proxy yourself (npx cursor-api-proxy) or use Option A/B for auto-start.
```

### Endpoints

| Method | Path | Description |
@@ -111,6 +153,8 @@

## Environment variables

Environment handling is centralized in one module. Aliases, defaults, path resolution, platform fallbacks, and `--tailscale` host behavior are resolved consistently before the server starts.
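
As one concrete case, the agent-binary alias precedence from the table can be expressed as (a sketch, not the env module's actual code):

```javascript
// Alias precedence for the agent binary, per the table:
// CURSOR_AGENT_BIN, then CURSOR_CLI_BIN, then CURSOR_CLI_PATH, else "agent".
function resolveAgentBin(env) {
  return env.CURSOR_AGENT_BIN || env.CURSOR_CLI_BIN || env.CURSOR_CLI_PATH || "agent";
}
```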

| Variable | Default | Description |
|----------|---------|-------------|
| `CURSOR_BRIDGE_HOST` | `127.0.0.1` | Bind address |
@@ -126,8 +170,35 @@
| `CURSOR_BRIDGE_TLS_CERT` | — | Path to TLS certificate file (e.g. Tailscale cert). Use with `CURSOR_BRIDGE_TLS_KEY` for HTTPS. |
| `CURSOR_BRIDGE_TLS_KEY` | — | Path to TLS private key file. Use with `CURSOR_BRIDGE_TLS_CERT` for HTTPS. |
| `CURSOR_BRIDGE_SESSIONS_LOG` | `~/.cursor-api-proxy/sessions.log` | Path to log file; each request is appended as a line (timestamp, method, path, IP, status). |
| `CURSOR_BRIDGE_CHAT_ONLY_WORKSPACE` | `true` | When `true` (default), the CLI runs in an empty temp dir so it **cannot read or write your project**; pure chat only. The proxy also overrides `HOME`, `USERPROFILE`, and `CURSOR_CONFIG_DIR` so the agent cannot load rules from `~/.cursor` or project rules from elsewhere. Set to `false` to pass the real workspace (e.g. for `X-Cursor-Workspace`). |
| `CURSOR_BRIDGE_VERBOSE` | `false` | When `true`, print full request messages and response content to stdout for every completion (both stream and sync). |
| `CURSOR_BRIDGE_MAX_MODE` | `false` | When `true`, enable Cursor **Max Mode** for all requests (larger context window, higher tool-call limits). The proxy writes `maxMode: true` to `cli-config.json` before each run. Works when using `CURSOR_AGENT_NODE`/`CURSOR_AGENT_SCRIPT`, the versioned layout (`versions/YYYY.MM.DD-commit/`), or node.exe + index.js next to agent.cmd. |
| `CURSOR_BRIDGE_PROMPT_VIA_STDIN` | `false` | When `true`, the proxy sends the user prompt to the agent via **stdin** instead of as a command-line argument. Use this on Windows if the agent does not receive the full prompt when it is passed in argv (e.g. “no headlines” / truncated). The Cursor agent must support reading the prompt from stdin when no positional prompt is given; if it does not, this option has no effect or may cause errors. |
| `CURSOR_BRIDGE_USE_ACP` | `false` | When `true`, the proxy uses **ACP (Agent Client Protocol)** to talk to the Cursor CLI: it spawns `agent acp` and sends the prompt via JSON-RPC over stdio. This avoids Windows argv limits and quoting issues entirely. On Windows with `agent.cmd`, the proxy auto-detects the versioned layout and spawns Node directly. See [Cursor ACP docs](https://cursor.com/docs/cli/acp). To debug ACP hangs, set `NODE_DEBUG=cursor-api-proxy:acp` to log each ACP step. |
| `CURSOR_BRIDGE_ACP_SKIP_AUTHENTICATE` | auto | When `CURSOR_API_KEY` is set, the proxy skips the ACP authenticate step (avoids hangs). Set to `true` to skip authenticate when using `agent login` instead. |
| `CURSOR_BRIDGE_ACP_RAW_DEBUG` | `false` | When `1` or `true`, log every raw JSON-RPC line from ACP stdout (very verbose). Requires `NODE_DEBUG=cursor-api-proxy:acp` to see output. |
| `CURSOR_AGENT_BIN` | `agent` | Path to Cursor CLI binary. Alias precedence: `CURSOR_AGENT_BIN`, then `CURSOR_CLI_BIN`, then `CURSOR_CLI_PATH`. |
| `CURSOR_AGENT_NODE` | — | **(Windows)** Path to Node.js executable. When set together with `CURSOR_AGENT_SCRIPT`, spawns Node directly instead of going through cmd.exe, bypassing the ~8191 character command line limit. |
| `CURSOR_AGENT_SCRIPT` | — | **(Windows)** Path to the agent script (e.g. `agent.cmd` or the underlying `.js`). Use with `CURSOR_AGENT_NODE` to bypass cmd.exe for long prompts. |

Notes:
- `--tailscale` changes the default host to `0.0.0.0` only when `CURSOR_BRIDGE_HOST` is not already set.
- ACP `session/request_permission` is answered with `reject-once` (least privilege) so the agent is never granted file or tool access; this is intentional for chat-only mode.
- Relative paths such as `CURSOR_BRIDGE_WORKSPACE`, `CURSOR_BRIDGE_SESSIONS_LOG`, `CURSOR_BRIDGE_TLS_CERT`, and `CURSOR_BRIDGE_TLS_KEY` are resolved from the current working directory.
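
For reference, the ACP transport speaks newline-delimited JSON-RPC 2.0 over stdio; a minimal framing sketch (generic JSON-RPC envelope, not Cursor's exact message schema):

```javascript
// Newline-delimited JSON-RPC 2.0 framing, as used by stdio transports
// such as ACP. The method and params shown are illustrative.
function jsonRpcLine(id, method, params) {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params }) + "\n";
}
```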

#### Windows command line limit bypass

On Windows, cmd.exe has a ~8191 character limit on the command line. Long prompts passed as arguments can exceed this and cause the agent to fail. When `agent.cmd` is used (e.g. from `%LOCALAPPDATA%\cursor-agent\`), the proxy **auto-detects the versioned layout** (`versions/YYYY.MM.DD-commit/`) and spawns `node.exe` + `index.js` from the latest version directly, bypassing cmd.exe and PowerShell. If auto-detection does not apply, set both `CURSOR_AGENT_NODE` (path to `node.exe`) and `CURSOR_AGENT_SCRIPT` (path to the agent script). The proxy will then spawn Node directly with the script and args instead of using cmd.exe, avoiding the limit.
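
The decision described above can be sketched as follows (illustrative names and threshold; not the proxy's actual code):

```javascript
// Sketch: choose how to spawn the agent on Windows. When a Node binary and
// script are known, spawn Node directly and avoid the ~8191-char cmd.exe limit.
const CMD_EXE_LIMIT = 8191;

function chooseSpawn({ agentBin, agentNode, agentScript, promptLength }) {
  if (agentNode && agentScript) {
    return { command: agentNode, args: [agentScript] }; // bypasses cmd.exe
  }
  if (/\.cmd$/i.test(agentBin) && promptLength > CMD_EXE_LIMIT) {
    // cmd.exe would truncate or reject the command line.
    throw new Error("Prompt too long for cmd.exe; set CURSOR_AGENT_NODE and CURSOR_AGENT_SCRIPT");
  }
  return { command: agentBin, args: [] };
}
```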

Example (adjust paths to your install):

```bash
set CURSOR_AGENT_NODE=C:\Program Files\nodejs\node.exe
set CURSOR_AGENT_SCRIPT=C:\path\to\Cursor\resources\agent\agent.cmd
# or for cursor-agent versioned layout:
# set CURSOR_AGENT_NODE=%LOCALAPPDATA%\cursor-agent\versions\2026.03.11-6dfa30c\node.exe
# set CURSOR_AGENT_SCRIPT=%LOCALAPPDATA%\cursor-agent\versions\2026.03.11-6dfa30c\index.js
```

CLI flags:

@@ -140,7 +211,7 @@ Optional per-request override: send header `X-Cursor-Workspace: <path>` to use a

## Streaming

The proxy supports `stream: true` on `POST /v1/chat/completions` and `POST /v1/messages`. It returns Server-Sent Events (SSE) in OpenAI’s streaming format. Cursor CLI emits incremental deltas plus a final full message; the proxy deduplicates output so clients receive each chunk only once.
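
The deduplication can be pictured with a small sketch (not the proxy's actual implementation): forward only the text that extends what was already emitted.

```javascript
// Sketch: emit only the unseen suffix, so an incremental delta stream
// followed by a final full message yields each character exactly once.
function createDeduper() {
  let sent = "";
  return (chunk) => {
    // A final full message restates already-sent text; otherwise treat the
    // chunk as an incremental delta appended to what was sent.
    const full = chunk.startsWith(sent) ? chunk : sent + chunk;
    const delta = full.slice(sent.length);
    sent = full;
    return delta;
  };
}
```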

**Test streaming:** from repo root, with the proxy running:

54 changes: 43 additions & 11 deletions examples/README.md
@@ -1,25 +1,57 @@
# Examples

**Prerequisites for all examples:** Cursor CLI installed and authenticated (`agent login`). The SDK examples **start the proxy in the background automatically** if it is not already running.

Optional: set `CURSOR_PROXY_URL` to use a different proxy URL (default `http://127.0.0.1:8765`). Set `startProxy: false` when creating the client if you run the proxy yourself.

---

## SDK examples (using the cursor-api-proxy package)

### sdk-client.mjs

Uses the **minimal client** (`createCursorProxyClient`). Proxy starts automatically on first request. No extra dependencies.

```bash
npm run build # if running from repo
node examples/sdk-client.mjs
```

### sdk-openai.mjs

Uses **getOpenAIOptionsAsync** with the **OpenAI SDK**. Proxy starts automatically. This is an optional example; `openai` is not part of this package and only needs to be installed in the project where you run the example.

```bash
npm install openai
node examples/sdk-openai.mjs
```

### sdk-stream.mjs

Uses the **minimal client**’s **fetch** for streaming. Proxy starts automatically on first request.

```bash
node examples/sdk-stream.mjs
```

---

## Raw fetch examples (no SDK)

### test.mjs

Non-streaming chat completion via raw `fetch` (no cursor-api-proxy SDK import).

```bash
node examples/test.mjs
```

### test-stream.mjs

Streaming chat completion via raw `fetch`.

```bash
node examples/test-stream.mjs
```

Prints each streamed chunk and the total character count.
34 changes: 34 additions & 0 deletions examples/sdk-client.mjs
@@ -0,0 +1,34 @@
#!/usr/bin/env node
/**
* Example: use the SDK minimal client (createCursorProxyClient).
* The proxy starts in the background automatically if not already running,
* and the SDK stops it when this script exits.
*
* Prereq: Cursor CLI installed and logged in (agent login)
*
* Run: node examples/sdk-client.mjs
*/

import { createCursorProxyClient } from "cursor-api-proxy";

async function main() {
const proxy = createCursorProxyClient();

console.log("Proxy will start automatically if needed. Base URL:", proxy.baseUrl);
console.log("---");

const data = await proxy.chatCompletionsCreate({
model: "auto",
messages: [{ role: "user", content: "Say hello in one short sentence." }],
});

const content = data.choices?.[0]?.message?.content ?? "(no content)";
console.log("Response:", content);
console.log("---");
console.log("Full response (choices):", JSON.stringify(data.choices, null, 2));
}

main().catch((err) => {
console.error(err);
process.exit(1);
});
36 changes: 36 additions & 0 deletions examples/sdk-openai.mjs
@@ -0,0 +1,36 @@
#!/usr/bin/env node
/**
* Example: use the SDK with the OpenAI client (getOpenAIOptionsAsync).
* The proxy starts in the background automatically if not already running,
* and the SDK stops it when this script exits.
*
* Prereqs: Cursor CLI installed and logged in (agent login). Install OpenAI SDK: npm install openai
*
* Run: node examples/sdk-openai.mjs
*/

import OpenAI from "openai";
import { getOpenAIOptionsAsync } from "cursor-api-proxy";

async function main() {
const opts = await getOpenAIOptionsAsync();
const client = new OpenAI(opts);

console.log("Chat completion via OpenAI SDK + cursor-api-proxy (proxy starts automatically if needed)");
console.log("---");

const completion = await client.chat.completions.create({
model: "auto",
messages: [{ role: "user", content: "Say hello in one short sentence." }],
});

const content = completion.choices[0]?.message?.content ?? "(no content)";
console.log("Response:", content);
console.log("---");
console.log("Usage:", completion.usage ?? "N/A");
}

main().catch((err) => {
console.error(err);
process.exit(1);
});