
feat: add LM Studio provider for local Qwen model support#340

Open
anandgupta42 wants to merge 3 commits into main from config/lmstudio-provider

Conversation


anandgupta42 (Contributor) commented Mar 20, 2026

What does this PR do?

Add LM Studio as an OpenAI-compatible provider in opencode.jsonc so local models (e.g., Qwen) can be used via lmstudio/qwen.
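A sketch of the resulting provider entry, based on the fields described in the review walkthrough (the model ID in the comment is illustrative; fill in whatever your LM Studio instance actually reports):

```jsonc
{
  "provider": {
    "lmstudio": {
      "name": "LM Studio",
      "npm": "@ai-sdk/openai-compatible",
      "env": ["LMSTUDIO_API_KEY"],
      "options": {
        // LM Studio's local server accepts a placeholder key.
        "apiKey": "lm-studio",
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        // Fill in with the model IDs your LM Studio instance reports, e.g.:
        // "qwen2.5-7b-instruct": { "name": "Qwen2.5 7B Instruct", "tool_call": true }
      }
    }
  }
}
```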

Type of change

  • New feature (non-breaking change which adds functionality)

Issue for this PR

Closes #339

How did you verify your code works?

  • Verified config schema matches the provider config structure in packages/opencode/src/config/config.ts
  • LM Studio uses the standard OpenAI-compatible API at localhost:1234/v1

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • New and existing unit tests pass locally with my changes

Summary by CodeRabbit

  • New Features
    • Added LM Studio as a local provider option and support for referencing local models (example: qwen2.5-7b-instruct) with tool-calling and expanded context/output limits.
  • Documentation
    • Added setup and quickstart instructions for using LM Studio locally, including model selection and connection guidance.

Register `lmstudio` as an OpenAI-compatible provider in
`opencode.jsonc`, pointing at the default LM Studio local
server (`localhost:1234`).

claude (bot) left a comment


Claude Code Review

This repository is configured for manual code reviews. Comment @claude review to trigger a review.


coderabbitai bot commented Mar 20, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: c446343c-7a20-4dac-956f-827aead577a2

📥 Commits

Reviewing files that changed from the base of the PR and between ba4d00f and 7be252c.

📒 Files selected for processing (3)
  • .opencode/opencode.jsonc
  • docs/docs/configure/providers.md
  • docs/docs/getting-started/quickstart.md
✅ Files skipped from review due to trivial changes (1)
  • docs/docs/getting-started/quickstart.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • .opencode/opencode.jsonc

📝 Walkthrough

Walkthrough

Added a new provider.lmstudio entry to .opencode/opencode.jsonc registering an OpenAI-compatible LM Studio provider (adapter: @ai-sdk/openai-compatible) with env LMSTUDIO_API_KEY, options pointing at http://localhost:1234/v1, and an (empty) models map; docs updated with setup and usage instructions.

Changes

  • Provider config (.opencode/opencode.jsonc): Added provider.lmstudio with name: "LM Studio", npm: "@ai-sdk/openai-compatible", env: ["LMSTUDIO_API_KEY"], options: { apiKey: "lm-studio", baseURL: "http://localhost:1234/v1" }, and an empty models map (commented examples present). Attention: models need actual LM Studio model IDs before use.
  • Documentation, providers (docs/docs/configure/providers.md): Added an "LM Studio (Local)" section describing the provider.lmstudio config, setup steps (run LM Studio, load a model, query /v1/models), and notes about matching model IDs and the baseURL/port.
  • Documentation, quickstart (docs/docs/getting-started/quickstart.md): Inserted an LM Studio example in the "Configure Your LLM" quickstart, including a sample model entry and a top-level model reference (lmstudio/<model-id>).
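The lmstudio/<model-id> reference follows the usual <provider>/<model> convention. A hypothetical sketch of how such a reference splits into its two parts (opencode's actual resolution logic lives in packages/opencode/src/config/config.ts and may differ):

```typescript
// Hypothetical helper: split a "<provider>/<model>" reference.
// Only the first "/" separates provider from model, so model IDs
// containing ":" or further "/" characters pass through intact.
function parseModelRef(ref: string): { provider: string; model: string } {
  const i = ref.indexOf("/");
  if (i <= 0 || i === ref.length - 1) {
    throw new Error(`expected "<provider>/<model-id>", got "${ref}"`);
  }
  return { provider: ref.slice(0, i), model: ref.slice(i + 1) };
}
```

For example, parseModelRef("lmstudio/qwen2.5-7b-instruct") yields provider "lmstudio" and model "qwen2.5-7b-instruct", which is then looked up in the provider's models map.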

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 I knocked on localhost at dawn,
A tiny server yawned and drawn,
Models wait with keys in hand,
I hop, configure, then I stand—
Local magic on my command. 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check: ✅ Passed. The title accurately and clearly summarizes the main feature being added: LM Studio provider support for local models.
  • Description check: ✅ Passed. The description provides a clear summary, type of change, issue reference, and verification steps. However, it lacks explicit detail about the test plan and does not confirm whether the CHANGELOG was updated.
  • Linked Issues check: ✅ Passed. The PR implements the linked issue #339 objective by adding LM Studio as an OpenAI-compatible provider with the required configuration in opencode.jsonc and documentation.
  • Out of Scope Changes check: ✅ Passed. All changes align with the PR objective: config addition, documentation updates, and examples. No extraneous modifications are present.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files; docstring coverage check skipped.



coderabbitai (bot) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
.opencode/opencode.jsonc (1)

10-10: Consider making baseURL configurable via environment variable.

The empty env array means the LM Studio port and host are hardcoded to localhost:1234. While this is reasonable for the default setup, users running LM Studio on different ports or hosts would need to modify the config file directly.

Consider documenting that users can override this by modifying the config, or optionally support an environment variable like LMSTUDIO_BASE_URL for easier customization.
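A minimal sketch of the suggested fallback, assuming a hypothetical LMSTUDIO_BASE_URL variable (opencode's config layer may resolve environment variables differently):

```typescript
// Hypothetical sketch: prefer LMSTUDIO_BASE_URL from the environment,
// falling back to LM Studio's default local endpoint.
const DEFAULT_LMSTUDIO_BASE_URL = "http://localhost:1234/v1";

function resolveBaseURL(env: Record<string, string | undefined>): string {
  const override = env["LMSTUDIO_BASE_URL"];
  return override && override.length > 0 ? override : DEFAULT_LMSTUDIO_BASE_URL;
}
```

With this in place, users running LM Studio on a non-default host or port could export LMSTUDIO_BASE_URL instead of editing the config file.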

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.opencode/opencode.jsonc at line 10, The config currently has an empty "env"
array which forces a hardcoded LM Studio host/port; add support for an
environment variable (e.g., LMSTUDIO_BASE_URL) by adding an entry to the "env"
array and update the code that reads baseURL (where baseURL is currently sourced
from the static config) to fall back to process.env.LMSTUDIO_BASE_URL if
present, documenting the new variable in the config comment; reference the "env"
array in .opencode/opencode.jsonc and the code paths that construct/consume
baseURL (search for variables named baseURL or functions that initialize the LM
Studio client) to implement this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.opencode/opencode.jsonc:
- Around line 19-22: The current .opencode/opencode.jsonc "limit" block sets
"context" to 32768 and "output" to 4096; update these numeric values to match
Qwen2.5 capabilities by setting "context": 131072 and "output": 8192 in the
"limit" object (if you plan to deploy smaller 0.5B–3B variants, keep "output":
8192 but reduce "context" back to 32768); modify the "limit" object's "context"
and "output" fields accordingly.

---

Nitpick comments:
In @.opencode/opencode.jsonc:
- Line 10: The config currently has an empty "env" array which forces a
hardcoded LM Studio host/port; add support for an environment variable (e.g.,
LMSTUDIO_BASE_URL) by adding an entry to the "env" array and update the code
that reads baseURL (where baseURL is currently sourced from the static config)
to fall back to process.env.LMSTUDIO_BASE_URL if present, documenting the new
variable in the config comment; reference the "env" array in
.opencode/opencode.jsonc and the code paths that construct/consume baseURL
(search for variables named baseURL or functions that initialize the LM Studio
client) to implement this change.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: fa6f723e-b8bc-4828-9a43-c8f755b4b0ea

📥 Commits

Reviewing files that changed from the base of the PR and between 096008a and 8ef2660.

📒 Files selected for processing (1)
  • .opencode/opencode.jsonc

- Port: 11434 (not default 1234)
- Models: `gpt-oss:20b` and `deepseek-r1:70b` (actual loaded models)

coderabbitai (bot) left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.opencode/opencode.jsonc:
- Line 13: The baseURL in .opencode/opencode.jsonc is pointing to Ollama's port
(http://localhost:11434/v1); update the "baseURL" value to use LM Studio's
default port by changing the URL to http://localhost:1234/v1 so the
OpenAI-compatible endpoint is correct.
- Around line 15-31: The models section is missing the documented key used
elsewhere; add or rename an entry so "lmstudio/qwen" exists in the "models" map
(or update callers to use one of the existing keys like "gpt-oss:20b" or
"deepseek-r1:70b"); specifically, add a "lmstudio/qwen" model object with the
same structure (name, tool_call, limit) or change references to match the
existing keys to ensure callers referencing "lmstudio/qwen" resolve correctly.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a7b7735a-9300-48a3-bb6c-daf544578847

📥 Commits

Reviewing files that changed from the base of the PR and between 8ef2660 and ba4d00f.

📒 Files selected for processing (1)
  • .opencode/opencode.jsonc

Comment on lines +15 to +31
"models": {
  "gpt-oss:20b": {
    "name": "GPT-OSS 20B (LM Studio)",
    "tool_call": true,
    "limit": {
      "context": 32768,
      "output": 4096,
    },
  },
  "deepseek-r1:70b": {
    "name": "DeepSeek R1 70B (LM Studio)",
    "tool_call": true,
    "limit": {
      "context": 32768,
      "output": 4096,
    },
  },


⚠️ Potential issue | 🟠 Major

Configured model keys don’t satisfy the documented lmstudio/qwen usage.

The PR objective/examples call out lmstudio/qwen, but Lines 15-31 only define gpt-oss:20b and deepseek-r1:70b. Users following the documented key won’t find a matching model entry.

Suggested config adjustment
       "models": {
+        "qwen": {
+          "name": "Qwen (LM Studio)",
+          "tool_call": true,
+          "limit": {
+            "context": 32768,
+            "output": 4096
+          }
+        },
         "gpt-oss:20b": {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.opencode/opencode.jsonc around lines 15 - 31, The models section is missing
the documented key used elsewhere; add or rename an entry so "lmstudio/qwen"
exists in the "models" map (or update callers to use one of the existing keys
like "gpt-oss:20b" or "deepseek-r1:70b"); specifically, add a "lmstudio/qwen"
model object with the same structure (name, tool_call, limit) or change
references to match the existing keys to ensure callers referencing
"lmstudio/qwen" resolve correctly.

- Use LM Studio default port 1234 (not Ollama's 11434)
- Replace hardcoded model IDs with commented examples users fill in
- Add `LMSTUDIO_API_KEY` env var support
- Add LM Studio section to providers.md docs
- Add LM Studio tab to quickstart.md provider examples


Successfully merging this pull request may close these issues.

feat: add LM Studio provider configuration for local model support
