
providers: add OpenAI compatible provider #185

Open
saem wants to merge 30 commits into rohitg00:main from saem:openai-compatible-providers

Conversation


@saem saem commented Apr 22, 2026

Summary

Created OpenAI compatible model and embedding providers.

Details

The new provider is compatible with any OpenAI API endpoint. This includes OpenRouter, which has been subsumed as part of this change.

Detection has been added for:

  • OpenAI
  • LM Studio
  • Ollama
  • vLLM

This should allow for easy local model detection and usage.
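As a rough sketch of the detection shape (names and defaults here are illustrative, not necessarily the exact ones in src/config.ts):

```typescript
// Hypothetical sketch of env-based provider detection. The real logic lives
// in src/config.ts; env var names match the PR, defaults are assumptions.
type DetectedProvider = { provider: string; baseURL?: string; model?: string };

function detectOpenAICompatible(
  env: Record<string, string | undefined>,
): DetectedProvider | null {
  if (env["OPENAI_API_KEY"]) {
    return { provider: "openai", model: env["OPENAI_MODEL"] };
  }
  if (env["LMSTUDIO_BASE_URL"] || env["LMSTUDIO_MODEL"]) {
    return {
      provider: "lmstudio",
      baseURL: env["LMSTUDIO_BASE_URL"] ?? "http://localhost:1234",
      model: env["LMSTUDIO_MODEL"],
    };
  }
  if (env["OLLAMA_BASE_URL"] || env["OLLAMA_MODEL"]) {
    return {
      provider: "ollama",
      baseURL: env["OLLAMA_BASE_URL"] ?? "http://localhost:11434",
      model: env["OLLAMA_MODEL"],
    };
  }
  if (env["VLLM_BASE_URL"]) {
    // vLLM has no conventional default port, so a base URL is required.
    return { provider: "vllm", baseURL: env["VLLM_BASE_URL"], model: env["VLLM_MODEL"] };
  }
  return null;
}
```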


Remaining work:

  • undo the consolidation of OpenRouter

Notes for Reviewers:

  • I only have LM Studio set up, so I'm testing with that locally
  • Full disclosure: this was developed via Gemini
  • Gemini went with two style changes that I'm not sure about:
    1. use of mocks rather than process env vars plus clean-up
    2. use of constructor parameters for member declaration

Summary by CodeRabbit

  • New Features

    • Added support for OpenAI, LM Studio, Ollama, and vLLM as LLM and embedding providers; new local/default base URL and model handling.
    • Introduced a new provider implementation for OpenAI-compatible backends with configurable timeouts and headers.
  • Documentation

    • Updated README and .env example to document new providers, embedding config, and a minor wording tweak.
  • Tests

    • Expanded tests for provider detection, embedding behavior, and request handling; adjusted filesystem watcher timing.


vercel Bot commented Apr 22, 2026

@saem is attempting to deploy a commit to rohitg00's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai Bot commented Apr 22, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds support for OpenAI, Ollama, LM Studio, and vLLM as LLM and embedding backends; introduces an OpenAIProvider implementation; updates embedding provider wiring to accept keyless/local endpoints and configurable base URL/model; extends README and tests.

Changes

  • Documentation (README.md): Refines PreToolUse wording and adds provider and embedding env examples for OPENAI_*, LMSTUDIO_*, OLLAMA_*, VLLM_* and embedding variables.
  • Provider Configuration & Detection (src/config.ts): Adds env-based detection of openai, ollama, lmstudio, and vllm for LLM and embedding providers; returns provider-specific { model, maxTokens, baseURL } and updates the fallback whitelist.
  • Embedding Providers (src/providers/embedding/index.ts, src/providers/embedding/openai.ts): createEmbeddingProvider adds cases for ollama/lmstudio/vllm routed to the OpenAI embedding implementation; the OpenAIEmbeddingProvider constructor now accepts optional baseUrl and model and omits Authorization for keyless ("no-key-required") providers.
  • LLM Provider Implementation & Factory (src/providers/openai.ts, src/providers/index.ts): Adds OpenAIProvider implementing MemoryProvider (POSTs to chat/completions); createBaseProvider updated to instantiate OpenAIProvider for openai, ollama, lmstudio, and vllm with apiKey/baseURL handling and keyless defaults.
  • Types (src/types.ts): Extends the ProviderType union with lmstudio, openai, ollama, vllm; removes the duplicate source field from RetentionScore.
  • Tests (test/local-providers.test.ts, test/embedding-provider.test.ts, test/fs-watcher.test.ts): Adds tests for detection/instantiation across the new backends, fetch mocks for the OpenAI provider and embeddings, extends embedding detection tests, and increases the fs-watcher delay (800ms → 1500ms).
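The keyless-auth behavior summarized above can be sketched like this (the "no-key-required" sentinel matches the PR; the function name and shape are illustrative, not the actual code):

```typescript
// Sketch of the keyless-auth header logic described above. Local backends
// (Ollama, LM Studio, vLLM) pass the "no-key-required" sentinel, which
// suppresses the Authorization header entirely.
function buildHeaders(
  apiKey: string,
  extra: Record<string, string> = {},
): Record<string, string> {
  return {
    "Content-Type": "application/json",
    ...(apiKey && apiKey !== "no-key-required"
      ? { Authorization: `Bearer ${apiKey}` }
      : {}),
    ...extra,
  };
}
```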

Sequence Diagram(s)

sequenceDiagram
  participant App as App/Code
  participant Config as Config Loader
  participant Factory as Provider Factory
  participant Provider as OpenAIProvider / EmbeddingProvider
  participant API as External LLM/Embedding API

  App->>Config: loadConfig()
  Config->>Factory: detect provider via env vars
  Factory->>Provider: instantiate(name, apiKey/baseURL/model, maxTokens)
  Provider->>API: POST /v1/chat/completions or /v1/embeddings
  API-->>Provider: response (success or error)
  Provider-->>App: result or error

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐇 I hopped through env vars, sniffed each URL and key,
Found OpenAI, Ollama, LM Studio, vLLM for me!
I stitched models, keys, and baseURLs with care,
Sent embeddings and chat calls out on the air.
A carrot for code, a twitch of my ear! 🥕

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title 'providers: add OpenAI compatible provider' clearly and concisely summarizes the main change: adding support for OpenAI-compatible providers across the codebase.
  • Linked Issues Check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Author

saem commented Apr 22, 2026

FYI, I'm:

  • getting rid of the mocks and increasing test coverage
  • making sure I have LM Studio working end to end for me

I'd like to know whether the general idea of the PR (an OpenAI router plus detection for various local runtimes) is something the project would like to see developed. I haven't spent much time on it, so it's fine if you're not interested.

Also consolidated embedding provider tests.
Author

saem commented Apr 22, 2026

@rohitg00 apologies for the ping, just wondering if you're okay with this PR in spirit? If so, I'll keep going; otherwise I'll close it down.

@rohitg00
Owner

I think this is already addressed in today's release by someone.

Author

saem commented Apr 24, 2026

I think this is already addressed in today's release by someone.

If you're thinking of #186, that was only the OpenAI embedding provider.

saem added 6 commits April 23, 2026 21:12
Also add a test because `OpenRouterEmbeddingProvider` was dropped from
the export but not detected.
@saem saem marked this pull request as ready for review April 24, 2026 04:43
Author

saem commented Apr 24, 2026

I've cleaned things up; I still think this is useful.

Please take a look and let me know.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/config.ts (1)

125-137: ⚠️ Potential issue | 🟡 Minor

Warning text is stale — doesn't mention the new env vars.

The "No LLM provider key found" message still only names ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY, MINIMAX_API_KEY. After this PR, users can also satisfy detection with OPENAI_API_KEY, OLLAMA_BASE_URL/OLLAMA_MODEL, LMSTUDIO_BASE_URL/LMSTUDIO_MODEL, or VLLM_BASE_URL/VLLM_MODEL. Including them prevents user confusion when troubleshooting.

📝 Proposed edit
-    "[agentmemory] No LLM provider key found " +
-      "(ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY, MINIMAX_API_KEY). " +
+    "[agentmemory] No LLM provider key or local runtime found " +
+      "(ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY, MINIMAX_API_KEY, " +
+      "OPENAI_API_KEY, OLLAMA_BASE_URL/OLLAMA_MODEL, LMSTUDIO_BASE_URL/LMSTUDIO_MODEL, " +
+      "VLLM_BASE_URL/VLLM_MODEL). " +
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config.ts` around lines 125 - 137, The warning printed when allowAgentSdk
is false uses an outdated list of env vars; update the message emitted by
process.stderr.write in the allowAgentSdk block to include the new supported
detection variables (OPENAI_API_KEY, OLLAMA_BASE_URL/OLLAMA_MODEL,
LMSTUDIO_BASE_URL/LMSTUDIO_MODEL, VLLM_BASE_URL/VLLM_MODEL) along with the
existing ANTHROPIC/GEMINI/OPENROUTER/MINIMAX vars so users see the full set of
keys that enable LLM-backed compression in the code path around the
allowAgentSdk check.
README.md (1)

804-819: ⚠️ Potential issue | 🟡 Minor

Env example missing the new Ollama/vLLM variables.

The provider table on L787–788 and the embedding detection in src/config.ts support OLLAMA_BASE_URL, VLLM_BASE_URL, VLLM_MODEL, VLLM_EMBEDDING_BASE_URL, and VLLM_EMBEDDING_MODEL, but none appear in the .env template. Consider adding them for discoverability.

📝 Suggested additions
 # OPENAI_API_KEY=sk-...
+# OLLAMA_BASE_URL=http://localhost:11434
 # OLLAMA_MODEL=llama3
 # LMSTUDIO_BASE_URL=http://localhost:1234
+# VLLM_BASE_URL=http://localhost:8000
+# VLLM_MODEL=local-model
@@
 # OLLAMA_EMBEDDING_MODEL=nomic-embed-text
 # LMSTUDIO_EMBEDDING_BASE_URL=http://localhost:1234/v1/embeddings
+# VLLM_EMBEDDING_BASE_URL=http://localhost:8000
+# VLLM_EMBEDDING_MODEL=...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 804 - 819, Add the missing Ollama/vLLM environment
variables to the .env template so they match the provider table and embedding
detection logic: include OLLAMA_BASE_URL, VLLM_BASE_URL, VLLM_MODEL,
VLLM_EMBEDDING_BASE_URL, and VLLM_EMBEDDING_MODEL with example values and brief
comments; ensure names exactly match what's referenced by the provider table and
the embedding detection in src/config.ts so auto-detection works as intended.
🧹 Nitpick comments (4)
src/providers/index.ts (1)

63-94: Consider collapsing the four OpenAI-compatible branches.

The openai, ollama, vllm, and lmstudio cases differ only in default baseURL and whether OPENAI_API_KEY is honored. A small lookup table would reduce duplication and make future additions a one-line change.

♻️ Sketch
const OPENAI_COMPATIBLE: Record<string, { defaultBaseUrl?: string; apiKey: () => string }> = {
  openai:   { defaultBaseUrl: "https://api.openai.com",
              apiKey: () => getEnvVar("OPENAI_API_KEY") || "no-key-required" },
  ollama:   { defaultBaseUrl: "http://localhost:11434", apiKey: () => "no-key-required" },
  lmstudio: { defaultBaseUrl: "http://localhost:1234",  apiKey: () => "no-key-required" },
  vllm:     { apiKey: () => "no-key-required" }, // base URL required
};

const compat = OPENAI_COMPATIBLE[config.provider];
if (compat) {
  const baseUrl = config.baseURL || compat.defaultBaseUrl;
  if (!baseUrl) throw new Error(`${config.provider.toUpperCase()}_BASE_URL is required`);
  return new OpenAIProvider(config.provider, compat.apiKey(), config.model, config.maxTokens, baseUrl);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/index.ts` around lines 63 - 94, Replace the four near-duplicate
case branches for "openai", "ollama", "vllm", and "lmstudio" by introducing a
small lookup (e.g., OPENAI_COMPATIBLE) keyed by config.provider that maps to {
defaultBaseUrl?: string; apiKey: () => string }; then lookup compat =
OPENAI_COMPATIBLE[config.provider], derive baseUrl = config.baseURL ||
compat.defaultBaseUrl, throw an error if baseUrl is required but missing (for
vllm), and call new OpenAIProvider(config.provider, compat.apiKey(),
config.model, config.maxTokens, baseUrl) — reusing getEnvVar("OPENAI_API_KEY")
only for the openai entry.
src/providers/embedding/openai.ts (1)

53-78: Prefer an explicit "no auth" signal over the "no-key-required" sentinel string.

The "no-key-required" string has to be matched exactly in two places (here and at every caller in src/providers/embedding/index.ts), and the current if (!this.apiKey) throw check becomes easy to misread (the error message says OPENAI_API_KEY is required even though the sentinel is how callers skip auth). A null apiKey, or a small enum/flag like { auth: "none" | "bearer" }, would be harder to typo and clearer to read.

♻️ Sketch
-  constructor(apiKey?: string, baseUrl?: string, model?: string) {
-    this.apiKey = apiKey || getEnvVar("OPENAI_API_KEY") || "";
-    if (!this.apiKey) throw new Error("OPENAI_API_KEY is required");
+  constructor(apiKey?: string | null, baseUrl?: string, model?: string) {
+    // null ⇒ caller explicitly opted out of auth (local OpenAI-compatible backends)
+    const resolved = apiKey === null ? null : apiKey || getEnvVar("OPENAI_API_KEY") || "";
+    if (resolved === "") throw new Error("OPENAI_API_KEY is required");
+    this.apiKey = resolved;
@@
-        ...(this.apiKey && this.apiKey !== "no-key-required"
-          ? { Authorization: `Bearer ${this.apiKey}` }
-          : {}),
+        ...(this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/embedding/openai.ts` around lines 53 - 78, The code uses the
sentinel string "no-key-required" in the OpenAI provider causing confusing
checks; refactor the constructor and embedBatch to use an explicit auth mode
(e.g., a new parameter or property like authMode: "bearer" | "none" or make
apiKey nullable) instead of relying on the literal string, update the
constructor (constructor(apiKey?: string, baseUrl?: string, model?: string) and
the apiKey handling) to accept and store the explicit flag (and change the
thrown error message to reflect when authMode==="bearer" and no key provided),
and change embedBatch when building headers to add Authorization only when
authMode === "bearer" (or apiKey !== null if you choose nullable) so callers no
longer depend on the "no-key-required" sentinel; ensure corresponding callers in
src/providers/embedding/index.ts are updated to pass the explicit flag/nullable
apiKey as well.
test/fs-watcher.test.ts (1)

52-52: Consider a polling helper instead of a larger fixed sleep.

Bumping to 1500ms masks intermittency but still leaves the test timing-sensitive, and the other wait(800)/wait(900) cases remain on the old budget. A small poll-until-captured helper would remove flakes without blocking for ~1.5s every run.

♻️ Example helper
async function waitUntil<T>(
  fn: () => T | undefined,
  { timeoutMs = 2000, stepMs = 50 }: { timeoutMs?: number; stepMs?: number } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const v = fn();
    if (v) return v;
    await new Promise((r) => setTimeout(r, stepMs));
  }
  throw new Error("waitUntil: timed out");
}

// usage:
await waitUntil(() => (captured.length ? captured : undefined));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/fs-watcher.test.ts` at line 52, The test currently uses a fixed long
sleep (await wait(1500)) which makes it slow and still timing-sensitive; replace
that with a polling helper (e.g., add a waitUntil helper) and use it to poll for
the expected condition instead of wait(1500). Implement waitUntil(fn,
{timeoutMs?, stepMs?}) that repeatedly calls the provided lambda (e.g., () =>
captured.length ? captured : undefined) until it returns a truthy value or times
out, then swap the await wait(1500) in the test for await waitUntil(() =>
(captured.length ? captured : undefined), {timeoutMs: 2000, stepMs: 50}) so
other waits like wait(800)/wait(900) aren’t masking flakiness; reference the
test harness symbols wait and captured to locate where to add and use waitUntil.
src/providers/openai.ts (1)

17-23: Add a clarifying comment to document why compress and summarize are identical.

The MemoryProvider interface defines both methods with identical signatures, and all provider implementations (OpenAI, OpenRouter, Minimax, Anthropic, etc.) follow the same pattern. A brief comment explaining that this is intentional—to satisfy the interface requirement while delegating the actual differentiation to callers—would help future maintainers understand this is not a copy-paste mistake.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/openai.ts` around lines 17 - 23, The compress and summarize
methods in the OpenAI provider are intentionally identical to satisfy the
MemoryProvider interface and defer behavioral differences to callers; add a
brief clarifying comment above the two methods (referencing compress and
summarize in src/providers/openai.ts) stating that the duplication is
intentional to implement the interface and that callers are expected to
differentiate usage, mirroring the pattern used by other providers (OpenRouter,
Minimax, Anthropic).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Line 806: Update the README example for LMSTUDIO_BASE_URL so it does not
include the path that OpenAIProvider appends; change the example value from
including "/v1/chat/completions" to only the host and port (e.g.,
LMSTUDIO_BASE_URL=http://localhost:1234) so OpenAIProvider's appended
"/v1/chat/completions" produces a correct final URL.

In `@src/config.ts`:
- Around line 87-94: The current LLM provider branch uses
hasRealValue(env["OPENAI_API_KEY"]) || hasRealValue(env["OPENAI_BASE_URL"])
which lets OPENAI_BASE_URL alone select provider: "openai"; change the condition
to require an API key (hasRealValue(env["OPENAI_API_KEY"])) before returning the
openai LLM config so a bare BASE_URL won't force LLM selection, and keep
OPENAI_BASE_URL available for embedding/proxy configuration elsewhere (or
introduce distinct env vars if you prefer separate LLM vs embedding proxies);
update the conditional around provider: "openai" (and its model/env usage like
OPENAI_MODEL) accordingly.

In `@src/providers/embedding/index.ts`:
- Around line 49-60: The lmstudio and vllm cases call OpenAIEmbeddingProvider
with potentially undefined baseUrl/model so the provider falls back to OpenAI
defaults; update the lmstudio and vllm branches to validate
getEnvVar("LMSTUDIO_EMBEDDING_BASE_URL") / getEnvVar("LMSTUDIO_EMBEDDING_MODEL")
and getEnvVar("VLLM_EMBEDDING_BASE_URL") / getEnvVar("VLLM_EMBEDDING_MODEL")
respectively, and throw a clear, failing error (or return a rejected result) if
either value is missing instead of instantiating OpenAIEmbeddingProvider;
reference OpenAIEmbeddingProvider and the getEnvVar keys to locate where to add
the checks and error messages.
- Around line 43-48: The Ollama branch in the embedding provider returns an
OpenAIEmbeddingProvider using "llama3" as the default model which is a chat
model and will fail at embedBatch; update the default argument in the case
"ollama" instantiation (where new OpenAIEmbeddingProvider is called and
getEnvVar("OLLAMA_EMBEDDING_MODEL") is used) to "nomic-embed-text" so the
provider uses a valid Ollama embedding model when the env var is not set.
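A minimal sketch of that default-model fallback (assuming a getEnvVar-style lookup over a plain env record; the helper name is illustrative):

```typescript
// Sketch of the suggested fix: default to an embedding-capable Ollama
// model ("nomic-embed-text") instead of the chat model "llama3".
function ollamaEmbeddingModel(env: Record<string, string | undefined>): string {
  return env["OLLAMA_EMBEDDING_MODEL"] || "nomic-embed-text";
}
```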

In `@src/providers/index.ts`:
- Around line 79-86: The vllm branch is using a non-null assertion for
config.baseURL when calling new OpenAIProvider("vllm", ..., config.baseURL!)
even though detectProvider (in detectProvider) may leave baseURL undefined;
change this to validate and throw if baseURL is missing: in the vllm case in
providers/index.ts (where OpenAIProvider is constructed) check config.baseURL
(or call a small helper) and throw a clear error requiring VLLM_BASE_URL if it's
undefined, or alternatively require detectProvider to set baseURL; ensure the
error references vllm and VLLM_BASE_URL so users know what to set before
constructing OpenAIProvider.

In `@src/providers/openai.ts`:
- Line 29: The URL concatenation for the OpenAI endpoint (const url =
`${this.baseUrl}/v1/chat/completions`) can produce malformed endpoints when
this.baseUrl contains a trailing slash or already includes /v1; update the logic
that builds or validates the base URL (e.g., in the constructor or where url is
computed) to normalize and/or validate this.baseUrl by trimming any trailing
slash and removing a trailing "/v1" if present (or otherwise ensure exactly one
"/v1" is appended), then build the final endpoint from the normalized base;
reference this.baseUrl and the url construction in src/providers/openai.ts when
implementing the fix.
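One possible normalization, per the suggestion above (a sketch, not the PR's code): strip trailing slashes and a trailing "/v1" so exactly one "/v1" ends up in the final URL.

```typescript
// Sketch of base-URL normalization: tolerate "http://host:port",
// "http://host:port/", and "http://host:port/v1" equally.
function chatCompletionsUrl(baseUrl: string): string {
  const normalized = baseUrl.replace(/\/+$/, "").replace(/\/v1$/, "");
  return `${normalized}/v1/chat/completions`;
}
```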
- Around line 30-47: The fetch call in openai.ts currently lacks an AbortSignal
timeout and can hang indefinitely; update the class constructor to accept a new
timeoutMs parameter (default 60_000) and store it on this.timeoutMs, then pass
signal: AbortSignal.timeout(this.timeoutMs) into the fetch options where the
POST is made (the same location that sets method/headers/body) so the request
will abort after the configured timeout; ensure any instantiations of the class
use the default when not provided and that references like this.timeoutMs,
AbortSignal.timeout, and the fetch call are updated accordingly.
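The timeout wiring could look roughly like this (a sketch assuming a runtime with AbortSignal.timeout, i.e. Node 17.3+ or a modern browser; the function name is illustrative):

```typescript
// Sketch: pass AbortSignal.timeout so the POST aborts after timeoutMs
// instead of hanging indefinitely on an unresponsive backend.
async function postWithTimeout(
  url: string,
  body: unknown,
  timeoutMs = 60_000,
): Promise<Response> {
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(timeoutMs),
  });
}
```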

In `@test/embedding-provider.test.ts`:
- Around line 11-24: The beforeEach setup in embedding-provider.test.ts doesn't
reset VLLM env vars used by the detection logic; update the beforeEach block
(where process.env is reset and keys like GEMINI_API_KEY, OPENAI_API_KEY, etc.
are set to "") to also set VLLM_EMBEDDING_BASE_URL and VLLM_EMBEDDING_MODEL to
empty strings so the test "returns null when no API keys are set" is fully
isolated from contributors' local shells.

In `@test/local-providers.test.ts`:
- Around line 118-124: The test overwrites global.fetch with mockFetch
(global.fetch = mockFetch) and never restores it, leaking the mock to other
tests; replace the direct assignment with vi.stubGlobal("fetch", mockFetch) and
ensure you call vi.unstubAllGlobals() in an afterEach block (or save and restore
the original fetch) so the mock is cleaned up; apply the same change for the
OpenAIEmbeddingProvider test suite where global.fetch is set (refer to mockFetch
and the suite around the OpenAIEmbeddingProvider tests) to prevent global state
leakage.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a5718af1-6dfc-496d-98c2-04768cda0f15

📥 Commits

Reviewing files that changed from the base of the PR and between 196cbd6 and 13da1a1.

📒 Files selected for processing (10)
  • README.md
  • src/config.ts
  • src/providers/embedding/index.ts
  • src/providers/embedding/openai.ts
  • src/providers/index.ts
  • src/providers/openai.ts
  • src/types.ts
  • test/embedding-provider.test.ts
  • test/fs-watcher.test.ts
  • test/local-providers.test.ts

saem and others added 5 commits April 23, 2026 22:08
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/providers/openai.ts (2)

14-14: extraHeaders is unreachable from ProviderConfig — plumb it through or drop it.

Per src/types.ts:124-130, ProviderConfig has no extraHeaders field, and createBaseProvider in src/providers/index.ts:61-94 never supplies the 6th positional arg. That means the parameter is effectively dead code for every production code path, even though the PR description explicitly calls out OpenRouter support — which in practice requires HTTP-Referer / X-Title headers for attribution and rate-limit tiering.

Either extend ProviderConfig with an optional extraHeaders?: Record<string, string> and forward it from each case in createBaseProvider, or remove the unused parameter until there's a consumer.
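One possible shape for that plumbing (a sketch only; fields beyond extraHeaders follow the PR's description of ProviderConfig, and the OpenRouter header values are hypothetical examples):

```typescript
// Sketch: extend ProviderConfig with optional extraHeaders and merge it
// so that fixed headers are applied last and cannot be clobbered.
interface ProviderConfigSketch {
  provider: string;
  model?: string;
  maxTokens?: number;
  baseURL?: string;
  extraHeaders?: Record<string, string>; // e.g. OpenRouter HTTP-Referer / X-Title
}

function headersFor(config: ProviderConfigSketch): Record<string, string> {
  // Spread extraHeaders first; fixed headers win on key collisions.
  return {
    ...(config.extraHeaders ?? {}),
    "Content-Type": "application/json",
  };
}
```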

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/openai.ts` at line 14, The constructor/property extraHeaders in
OpenAIProvider (extraHeaders: Record<string,string>) is never set because
ProviderConfig lacks extraHeaders and createBaseProvider doesn't pass a sixth
argument; either wire it through or remove it. Add optional extraHeaders?:
Record<string,string> to ProviderConfig and update createBaseProvider's case
branches to forward that field into the OpenAIProvider (and any other provider
constructors that accept it), ensuring the OpenAIProvider constructor/property
uses that value; alternatively, delete the unused extraHeaders member from
src/providers/openai.ts if you decide not to support propagated headers yet.

36-42: Header spread order lets extraHeaders silently override Authorization/Content-Type.

Because ...this.extraHeaders is spread last, a caller-supplied header with the same casing can clobber the bearer token or content type. For an internal-only plumbing this is minor, but if/when extraHeaders is surfaced through ProviderConfig (see other comment), consider moving ...this.extraHeaders before the fixed headers so authentication and content negotiation can't be accidentally overwritten from config.

♻️ Proposed reordering
       headers: {
+        ...this.extraHeaders,
         "Content-Type": "application/json",
         ...(this.apiKey && this.apiKey !== "no-key-required"
           ? { Authorization: `Bearer ${this.apiKey}` }
           : {}),
-        ...this.extraHeaders,
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/openai.ts` around lines 36 - 42, The headers spread currently
places ...this.extraHeaders last which allows caller-supplied values to override
required headers; in the OpenAI provider header construction (the object that
includes "Content-Type" and the conditional Authorization Bearer based on
this.apiKey) move ...this.extraHeaders before the fixed headers so Content-Type
and Authorization are applied last (e.g., spread extraHeaders first, then
"Content-Type", then the conditional Authorization) to prevent accidental
clobbering; update the header object used in the OpenAI request construction
(the headers literal in src/providers/openai.ts) accordingly.
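The spread-order behavior can be demonstrated standalone, with no repo types involved; the `extraHeaders` and `apiKey` values here are illustrative:

```typescript
// Later spreads win in object literals, so spread order decides who
// can override whom.
const extraHeaders: Record<string, string> = {
  Authorization: "Bearer caller-supplied",
};
const apiKey = "real-key";

// extraHeaders spread first: the fixed headers are applied last and
// cannot be clobbered by config.
const safe = {
  ...extraHeaders,
  "Content-Type": "application/json",
  ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
};

// extraHeaders spread last: a caller-supplied value silently replaces
// the bearer token.
const unsafe = {
  "Content-Type": "application/json",
  ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
  ...extraHeaders,
};
```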
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/openai.ts`:
- Around line 43-50: The OpenAI provider is sending max_tokens in the request
body (see the JSON body construction using this.model and this.maxTokens) but
OpenAI reasoning models (o1-preview, o1-mini, o3, o3-mini) require
max_completion_tokens; update documentation and/or code: add a README note
calling out this limitation for OpenAI reasoning models, and optionally modify
the request construction in the OpenAI provider (the JSON body that builds
messages/model/max_tokens) to detect reasoning-model names via this.model and
switch to using max_completion_tokens instead of max_tokens when a reasoning
model is used.
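If the code route is taken, the detection could look like the sketch below. The prefix list is an assumption about OpenAI's reasoning-model naming (`o1`, `o3`, and similar); check OpenAI's current documentation before relying on it:

```typescript
// Hypothetical helper: pick the token-limit field for the request body.
// OpenAI reasoning models reject max_tokens and require
// max_completion_tokens; the prefix list here is an assumption.
function tokenLimitField(model: string): "max_tokens" | "max_completion_tokens" {
  const reasoningPrefixes = ["o1", "o3", "o4"];
  const isReasoning = reasoningPrefixes.some(
    (p) => model === p || model.startsWith(`${p}-`),
  );
  return isReasoning ? "max_completion_tokens" : "max_tokens";
}

// Build the chat-completions body with the appropriate field name.
function buildBody(model: string, maxTokens: number, messages: unknown[]) {
  return { model, messages, [tokenLimitField(model)]: maxTokens };
}
```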


📥 Commits

Reviewing files that changed from the base of the PR and between f95cd2c and 0e93384.

📒 Files selected for processing (1)
  • src/providers/openai.ts

Comment thread src/providers/openai.ts Outdated
@qodo-ai-reviewer

Hi, detectEmbeddingProvider() returns "lmstudio"/"vllm" when only one of base URL or model is set, but createEmbeddingProvider() throws if required variables are missing. This can crash worker startup for partially-configured environments (including when only a base URL is set).

Severity: action required | Category: reliability

How to fix: Align detect/create requirements

Agent prompt to fix - you can give this to your LLM of choice:

Issue description

detectEmbeddingProvider() can return lmstudio/vllm with partial env configuration, but createEmbeddingProvider() throws unless required variables are present. This can crash startup.

Issue Context

  • Detection currently checks (BASE_URL || MODEL) for lmstudio/vllm.
  • Creation requires:
    • lmstudio: LMSTUDIO_EMBEDDING_MODEL (base has a default)
    • vllm: both VLLM_EMBEDDING_BASE_URL and VLLM_EMBEDDING_MODEL
  • src/index.ts calls createEmbeddingProvider() during startup.

Fix Focus Areas

  • src/config.ts[201-219]
  • src/providers/embedding/index.ts[30-64]

Suggested direction

Make detection require the same minimum inputs as creation (e.g., lmstudio: require MODEL; vllm: require both BASE_URL+MODEL), or make creation return null (or a clearer error/warning) when inputs are incomplete instead of throwing during startup.
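The first option in the suggested direction can be sketched as follows. The function shape and env-var names follow the review text; the real `detectEmbeddingProvider` in `src/config.ts` may be structured differently:

```typescript
// Sketch: make detection require the same minimum inputs as creation,
// so a partially configured environment is never detected and then
// crashed on during createEmbeddingProvider().
type EmbeddingProviderName = "lmstudio" | "vllm" | null;

function detectEmbeddingProvider(
  env: Record<string, string | undefined>,
): EmbeddingProviderName {
  // lmstudio: the base URL has a default, so only the model is required.
  if (env["LMSTUDIO_EMBEDDING_MODEL"]) return "lmstudio";
  // vllm: creation requires both base URL and model, so detection does too.
  if (env["VLLM_EMBEDDING_BASE_URL"] && env["VLLM_EMBEDDING_MODEL"]) {
    return "vllm";
  }
  return null;
}
```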

We noticed a couple of other issues in this PR as well - happy to share if helpful.


Found by Qodo code review

Owner

@rohitg00 rohitg00 left a comment


Thanks for this — local-provider support is a real unlock and the generic OpenAIProvider is clean. Few things to tune before merge:

Blockers

1. Provider detection order flips existing users

detectProvider now checks in this order:

minimax → lmstudio → ollama → vllm → openai → anthropic → gemini → openrouter

Any user with both ANTHROPIC_API_KEY and OPENAI_API_KEY set (common — lots of people keep both in ~/.env) silently flips from Anthropic to OpenAI after upgrading. That's a behavior change we can't ship without either a priority-preserving order OR a loud warning.

Fix: Move the four new detections after openrouter so legacy keys still win. Or introduce an explicit AGENTMEMORY_PROVIDER=openai override for deliberate selection and keep new detections behind it.
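Both suggested fixes combined might look like this sketch. `AGENTMEMORY_PROVIDER` and the legacy key names come from the review; the other env names and return values are assumptions about the actual `detectProvider` in `src/config.ts`:

```typescript
// Sketch: explicit override wins, then legacy keys, then the new
// local-provider detections, so existing Anthropic/OpenAI users are
// never silently flipped after upgrading.
function detectProvider(
  env: Record<string, string | undefined>,
): string | null {
  const override = env["AGENTMEMORY_PROVIDER"];
  if (override) return override; // deliberate selection always wins

  // Legacy keys keep their pre-existing priority.
  if (env["ANTHROPIC_API_KEY"]) return "anthropic";
  if (env["OPENAI_API_KEY"]) return "openai";
  if (env["OPENROUTER_API_KEY"]) return "openrouter";

  // New detections come after the legacy keys.
  if (env["LMSTUDIO_BASE_URL"]) return "lmstudio";
  if (env["VLLM_BASE_URL"]) return "vllm";
  return null;
}
```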

2. Hardcoded "gpt-4o" default

src/config.ts:

model: env["OPENAI_MODEL"] || "gpt-4o",

This violates the unopinionated-routing rule the rest of the codebase follows — agentmemory doesn't ship a model catalog. Other providers require the user to pick a model explicitly (or pick from *_MODEL env). A hardcoded "gpt-4o" bakes a pricing/capability assumption into the library that will age badly.

Fix: Require OPENAI_MODEL (throw with a clear error if unset) or use a neutral placeholder like "openai-default" and document it.
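The "require `OPENAI_MODEL`" option is a few lines; this is a sketch, with the helper name and error wording as assumptions:

```typescript
// Sketch: require OPENAI_MODEL instead of baking in "gpt-4o", keeping
// the library unopinionated about model choice.
function resolveOpenAIModel(
  env: Record<string, string | undefined>,
): string {
  const model = env["OPENAI_MODEL"];
  if (!model) {
    throw new Error(
      "OPENAI_MODEL is not set. agentmemory does not ship a model " +
        "catalog; set OPENAI_MODEL explicitly (e.g. OPENAI_MODEL=gpt-4o).",
    );
  }
  return model;
}
```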

Needs cleanup (non-blocking but request before merge)

3. "no-key-required" sentinel string

src/providers/openai.ts + src/providers/embedding/openai.ts use the literal "no-key-required" to mean "skip Authorization header." That's fragile — any user who accidentally names their key no-key-required hits it, and the intent is hidden.

Fix: constructor(apiKey: string | null, ...) + if (this.apiKey) { headers.Authorization = ... }. Same intent, zero magic string.
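That fix, spelled out as a standalone sketch (the helper name is illustrative; in the providers this would live in the request construction):

```typescript
// Sketch: nullable key instead of a "no-key-required" sentinel string.
// The Authorization header is attached only when a key is present.
function buildHeaders(apiKey: string | null): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (apiKey) {
    headers.Authorization = `Bearer ${apiKey}`;
  }
  return headers;
}
```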

4. Overlap with #186 already shipped in v0.9.2

OPENAI_BASE_URL + OPENAI_EMBEDDING_MODEL landed in v0.9.2 (#186). Your ctor-param additions extend that cleanly (arg ?? env ?? default) but please rebase on latest main to surface any unexpected drift, and add a ### Changed note to CHANGELOG explaining "OpenAIEmbeddingProvider constructor now accepts optional baseUrl + model args; env vars remain fallback."

5. Priority of OpenRouter vs. generic OpenAIProvider

OpenRouter is OpenAI-compatible — could be subsumed under OpenAIProvider with baseURL: "https://openrouter.ai/api/v1". You flagged this yourself and kept the separate OpenRouterProvider. For now keeping both is fine; raise an issue to consolidate in a later release so we don't have two code paths for the same protocol.

Minor

  • Env-var naming: PR adds VLLM_BASE_URL / LMSTUDIO_BASE_URL — brands in env names. Fine, matches the existing OPENAI_BASE_URL / COHERE_API_KEY pattern.
  • Tests using mocks vs env-var+cleanup (you flagged this) — either pattern is OK, keep what's there.
  • Constructor-param member declaration — non-issue, modern TS idiom.

Summary

Blockers (1) + (2) would silently change behavior for anyone already using agentmemory with Anthropic. Fix those and this ships. (3) is a 5-line cleanup. (4) is a rebase + one CHANGELOG line.

Happy to re-review as soon as those are in.

saem added 3 commits April 24, 2026 18:54
### **Blockers (Resolved)**

1. **Provider Detection Order:** I reordered the detection logic in `src/config.ts` so that legacy keys (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) are checked first. This prevents existing users from being silently switched to local providers upon upgrading.
2. **Explicit Override:** I implemented the `AGENTMEMORY_PROVIDER` environment variable, which allows users to deliberately select a provider regardless of detection priority.
3. **Removed Hardcoded `"gpt-4o"`:** The default OpenAI model is now `"openai-default"`, removing the capability assumption and requiring users to be explicit with `OPENAI_MODEL` if they want a specific OpenAI model.

### **Cleanup (Resolved)**

1. **Removed Sentinel Strings:** I eliminated the magic `"no-key-required"` string. The constructors for `OpenAIProvider` and `OpenAIEmbeddingProvider` now accept `string | null`, and the `Authorization` header is only attached if a key is present.
2. **CHANGELOG and Rebase:** I added the required notes to `CHANGELOG.md` under `## [Unreleased]`, specifically documenting the constructor changes for `OpenAIEmbeddingProvider`.

The only item **not addressed** is the consolidation of `OpenRouter` into `OpenAIProvider`. As the maintainer noted this was non-blocking and could be handled in a later release, I've left both code paths in place to minimize risk for this merge.

All 852 tests, including the new detection tests, are passing.
@saem
Author

saem commented Apr 25, 2026

I think I've addressed all the code feedback, including creating the issue (#199)

@saem
Author

saem commented Apr 25, 2026

There is a minor issue with the OpenAI provider's default-model approach: if OpenAI switches to a reasoning model by default, we won't be able to detect it. I think that's fine; we can deal with it when it happens.

Comment thread CHANGELOG.md Outdated