feat: add LM Studio provider for local Qwen model support #340
anandgupta42 wants to merge 3 commits into main from
Conversation
Register `lmstudio` as an OpenAI-compatible provider in `opencode.jsonc`, pointing at the default LM Studio local server (`localhost:1234`).
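The registration might look roughly like this (a sketch only; the `@ai-sdk/openai-compatible` package name and field shapes follow opencode's custom-provider convention and are not taken from this PR's diff):

```jsonc
{
  "provider": {
    "lmstudio": {
      // LM Studio serves an OpenAI-compatible API on port 1234 by default
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://localhost:1234/v1"
      }
    }
  }
}
```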
Actionable comments posted: 1
🧹 Nitpick comments (1)
.opencode/opencode.jsonc (1)
10-10: Consider making baseURL configurable via an environment variable. The empty `env` array means the LM Studio port and host are hardcoded to `localhost:1234`. While this is reasonable for the default setup, users running LM Studio on a different port or host would need to modify the config file directly. Consider documenting that users can override this by editing the config, or optionally supporting an environment variable like `LMSTUDIO_BASE_URL` for easier customization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/opencode.jsonc at line 10, The config currently has an empty "env" array which forces a hardcoded LM Studio host/port; add support for an environment variable (e.g., LMSTUDIO_BASE_URL) by adding an entry to the "env" array and update the code that reads baseURL (where baseURL is currently sourced from the static config) to fall back to process.env.LMSTUDIO_BASE_URL if present, documenting the new variable in the config comment; reference the "env" array in .opencode/opencode.jsonc and the code paths that construct/consume baseURL (search for variables named baseURL or functions that initialize the LM Studio client) to implement this change.
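A minimal sketch of the suggested fallback (the `resolveBaseURL` name is hypothetical, not one of opencode's actual code paths):

```typescript
// Hypothetical helper: prefer an LMSTUDIO_BASE_URL environment override,
// falling back to the baseURL read from the static config.
function resolveBaseURL(configBaseURL: string): string {
  return process.env.LMSTUDIO_BASE_URL ?? configBaseURL;
}
```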
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.opencode/opencode.jsonc:
- Around line 19-22: The current .opencode/opencode.jsonc "limit" block sets
"context" to 32768 and "output" to 4096; update these numeric values to match
Qwen2.5 capabilities by setting "context": 131072 and "output": 8192 in the
"limit" object (if you plan to deploy smaller 0.5B–3B variants, keep "output":
8192 but reduce "context" back to 32768); modify the "limit" object's "context"
and "output" fields accordingly.
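Applied to the limit block in question, the suggested values would look like this (a sketch; the Qwen2.5 figures come from the comment above):

```jsonc
"limit": {
  "context": 131072, // Qwen2.5 models support 128K context
  "output": 8192     // for 0.5B–3B variants, keep 8192 but drop context to 32768
}
```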
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: fa6f723e-b8bc-4828-9a43-c8f755b4b0ea
📒 Files selected for processing (1)
.opencode/opencode.jsonc
- Port: 11434 (not default 1234)
- Models: `gpt-oss:20b` and `deepseek-r1:70b` (actual loaded models)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.opencode/opencode.jsonc:
- Line 13: The baseURL in .opencode/opencode.jsonc is pointing to Ollama's port
(http://localhost:11434/v1); update the "baseURL" value to use LM Studio's
default port by changing the URL to http://localhost:1234/v1 so the
OpenAI-compatible endpoint is correct.
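The corrected option would read like this (sketch of the relevant fragment):

```jsonc
"options": {
  // 1234 is LM Studio's default port; 11434 is Ollama's
  "baseURL": "http://localhost:1234/v1"
}
```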
- Around line 15-31: The models section is missing the documented key used
elsewhere; add or rename an entry so "lmstudio/qwen" exists in the "models" map
(or update callers to use one of the existing keys like "gpt-oss:20b" or
"deepseek-r1:70b"); specifically, add a "lmstudio/qwen" model object with the
same structure (name, tool_call, limit) or change references to match the
existing keys to ensure callers referencing "lmstudio/qwen" resolve correctly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a7b7735a-9300-48a3-bb6c-daf544578847
📒 Files selected for processing (1)
.opencode/opencode.jsonc
  "models": {
    "gpt-oss:20b": {
      "name": "GPT-OSS 20B (LM Studio)",
      "tool_call": true,
      "limit": {
        "context": 32768,
        "output": 4096,
      },
    },
    "deepseek-r1:70b": {
      "name": "DeepSeek R1 70B (LM Studio)",
      "tool_call": true,
      "limit": {
        "context": 32768,
        "output": 4096,
      },
    },
Configured model keys don’t satisfy the documented lmstudio/qwen usage.
The PR objective/examples call out lmstudio/qwen, but Lines 15-31 only define gpt-oss:20b and deepseek-r1:70b. Users following the documented key won’t find a matching model entry.
Suggested config adjustment
"models": {
+ "qwen": {
+ "name": "Qwen (LM Studio)",
+ "tool_call": true,
+ "limit": {
+ "context": 32768,
+ "output": 4096
+ }
+ },
  "gpt-oss:20b": {
- Use LM Studio default port 1234 (not Ollama's 11434)
- Replace hardcoded model IDs with commented examples users fill in
- Add `LMSTUDIO_API_KEY` env var support
- Add LM Studio section to providers.md docs
- Add LM Studio tab to quickstart.md provider examples
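The `LMSTUDIO_API_KEY` wiring might look roughly like this (`{env:...}` is opencode's config substitution syntax; treat the exact field shapes as assumptions):

```jsonc
"lmstudio": {
  "options": {
    "baseURL": "http://localhost:1234/v1",
    // LM Studio doesn't require a real key, but OpenAI-style clients expect one
    "apiKey": "{env:LMSTUDIO_API_KEY}"
  }
}
```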
What does this PR do?
Add LM Studio as an OpenAI-compatible provider in `opencode.jsonc` so local models (e.g., Qwen) can be used via `lmstudio/qwen`.

Type of change
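Selecting the model in `opencode.jsonc` might then look like this (a sketch; it assumes a model entry keyed `qwen` exists under the `lmstudio` provider):

```jsonc
{
  // Hypothetical selection; requires a "qwen" entry in provider.lmstudio.models
  "model": "lmstudio/qwen"
}
```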
Issue for this PR
Closes #339
How did you verify your code works?
- `packages/opencode/src/config/config.ts`
- `localhost:1234/v1`

Checklist
Summary by CodeRabbit