29 changes: 29 additions & 0 deletions .opencode/opencode.jsonc
@@ -4,6 +4,35 @@
"opencode": {
"options": {},
},
// LM Studio — local inference via OpenAI-compatible API
// 1. Open LM Studio → Developer tab → Start Server (default port: 1234)
// 2. Load a model in LM Studio
// 3. Run: curl http://localhost:1234/v1/models to find the model ID
// 4. Add the model ID to the "models" section below
// 5. Use as: altimate-code run -m lmstudio/<model-id>
"lmstudio": {
"name": "LM Studio",
"npm": "@ai-sdk/openai-compatible",
"env": ["LMSTUDIO_API_KEY"],
"options": {
"apiKey": "lm-studio",
"baseURL": "http://localhost:1234/v1",
},
"models": {
// Add your loaded models here. The key must match the model ID from LM Studio.
// Examples:
// "qwen2.5-7b-instruct": {
// "name": "Qwen 2.5 7B Instruct",
// "tool_call": true,
// "limit": { "context": 131072, "output": 8192 }
// },
// "deepseek-r1:70b": {
// "name": "DeepSeek R1 70B",
// "tool_call": true,
// "limit": { "context": 65536, "output": 8192 }
// }
},
},
},
"permission": {
"edit": {
42 changes: 42 additions & 0 deletions docs/docs/configure/providers.md
@@ -148,6 +148,48 @@ No API key needed. Runs entirely on your local machine.
!!! info
Make sure Ollama is running before starting altimate. Install it from [ollama.com](https://ollama.com) and pull your desired model with `ollama pull llama3.1`.

## LM Studio (Local)

Run local models through [LM Studio](https://lmstudio.ai)'s OpenAI-compatible server:

```json
{
"provider": {
"lmstudio": {
"name": "LM Studio",
"npm": "@ai-sdk/openai-compatible",
"env": ["LMSTUDIO_API_KEY"],
"options": {
"apiKey": "lm-studio",
"baseURL": "http://localhost:1234/v1"
},
"models": {
"qwen2.5-7b-instruct": {
"name": "Qwen 2.5 7B Instruct",
"tool_call": true,
"limit": { "context": 131072, "output": 8192 }
}
}
}
},
"model": "lmstudio/qwen2.5-7b-instruct"
}
```

**Setup:**

1. Open LM Studio → **Developer** tab → **Start Server** (default port: 1234)
2. Load a model in LM Studio
3. Find your model ID: `curl http://localhost:1234/v1/models`
4. Add the model ID to the `models` section in your config
5. Use it: `altimate-code run -m lmstudio/<model-id>`
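
Step 3's `curl` returns JSON in the OpenAI-compatible `/v1/models` format; the snippet below is a minimal sketch of pulling the model IDs out of it. The response body here is a hypothetical example (actual IDs depend on which models you have loaded in LM Studio):

```python
import json

# Hypothetical /v1/models response in the OpenAI-compatible format;
# a real LM Studio server lists whatever models are currently loaded.
response_body = """
{
  "object": "list",
  "data": [
    {"id": "qwen2.5-7b-instruct", "object": "model"},
    {"id": "deepseek-r1-distill-llama-70b", "object": "model"}
  ]
}
"""

# Each "id" is what goes in the "models" section of your config
# and after the "lmstudio/" prefix on the command line.
model_ids = [m["id"] for m in json.loads(response_body)["data"]]
print(model_ids)
```

Every entry in `model_ids` is a candidate key for the `models` section and a valid `-m lmstudio/<model-id>` argument.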

!!! tip
The model key in your config must match the model ID returned by LM Studio's `/v1/models` endpoint. If you change models in LM Studio, update the config to match.

!!! note
If you changed LM Studio's default port, update the `baseURL` accordingly. No real API key is needed — the `"lm-studio"` placeholder satisfies the SDK requirement.

## OpenRouter

```json
25 changes: 25 additions & 0 deletions docs/docs/getting-started/quickstart.md
@@ -130,6 +130,31 @@ Switch providers at any time by updating the `provider` and `model` fields in `a
}
```

=== "LM Studio (Local)"

```json
{
"provider": {
"lmstudio": {
"name": "LM Studio",
"npm": "@ai-sdk/openai-compatible",
"options": {
"apiKey": "lm-studio",
"baseURL": "http://localhost:1234/v1"
},
"models": {
"qwen2.5-7b-instruct": {
"name": "Qwen 2.5 7B Instruct",
"tool_call": true,
"limit": { "context": 131072, "output": 8192 }
}
}
}
},
"model": "lmstudio/qwen2.5-7b-instruct"
}
```

=== "OpenRouter"

```json