
feat(onboard): add custom OpenAI-compatible provider option#624

Open
andy-ratsirarson wants to merge 1 commit into NVIDIA:main from Tenafli:feat/custom-provider

Conversation


@andy-ratsirarson andy-ratsirarson commented Mar 22, 2026

Summary

Add a "Custom OpenAI-compatible endpoint" option to the onboarding wizard, enabling users to bring any inference provider that exposes an OpenAI-compatible /v1/chat/completions endpoint.

Problem

NemoClaw currently supports NVIDIA Endpoint, Ollama, and vLLM as inference providers. Users who want to use other providers (Google Gemini, OpenRouter, Together AI, LiteLLM) have no onboarding path.

Additionally, non-NVIDIA endpoints may reject OpenAI-specific request parameters such as store, causing `400 status code (no body)` errors from the sandbox agent.

Solution

  • Add case "custom" to getProviderSelectionConfig() following the existing switch-case pattern for provider registration
  • Add interactive prompts for base URL, API key, and model name during onboarding
  • Support non-interactive mode via NEMOCLAW_CUSTOM_BASE_URL, NEMOCLAW_CUSTOM_API_KEY, and NEMOCLAW_MODEL
  • Set compat: { supportsStore: false } on the default inference model entry in openclaw.json to prevent strict endpoints from rejecting the store parameter. This is safe for all providers — NVIDIA and Ollama ignore the flag
  • Add custom profile to blueprint.yaml
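The new switch branch can be sketched as follows. This is an illustration based on the description above; the `INFERENCE_ROUTE_URL` value and the returned field names are assumptions, not the PR's actual source:

```javascript
// Hypothetical shape of the provider-selection mapper described above.
// INFERENCE_ROUTE_URL and the returned field names are assumptions.
const INFERENCE_ROUTE_URL = "http://inference.local/v1";

function getProviderSelectionConfig(provider, model) {
  switch (provider) {
    case "custom":
      return {
        endpointType: "custom",
        endpointUrl: INFERENCE_ROUTE_URL, // sandbox always talks to the gateway
        providerLabel: "Custom Provider",
        model,
      };
    // ...existing cases: NVIDIA Endpoint, Ollama, vLLM
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}

console.log(getProviderSelectionConfig("custom", "gemini-2.5-flash"));
```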

The custom provider follows the same gateway-routed architecture as existing providers: the sandbox talks to inference.local, and the OpenShell gateway proxies to the user endpoint with credential injection and model rewriting.
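As a rough illustration of that proxy step, the credential injection and model rewriting amount to a per-request transform like the following (a simplified sketch; the real logic lives in the OpenShell gateway and is not part of this diff, and all names here are illustrative):

```javascript
// Sketch of what the gateway does per request: inject the user's API key
// and rewrite the model field before forwarding to the custom endpoint.
function rewriteForUpstream(request, creds) {
  return {
    url: `${creds.baseUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      ...request.headers,
      Authorization: `Bearer ${creds.apiKey}`, // credential injection
    },
    body: { ...request.body, model: creds.model }, // model rewriting
  };
}

const out = rewriteForUpstream(
  { headers: { "content-type": "application/json" }, body: { model: "default", messages: [] } },
  { baseUrl: "https://example.invalid/v1/", apiKey: "sk-test", model: "gemini-2.5-flash" },
);
console.log(out.url, out.body.model);
// → https://example.invalid/v1/chat/completions gemini-2.5-flash
```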

Files changed

| File | Change |
| --- | --- |
| `Dockerfile` | Add `supportsStore: false` compat flag to inference model entry |
| `bin/lib/inference-config.js` | Add custom provider case |
| `bin/lib/onboard.js` | Custom provider menu option, credential prompts, openshell provider/inference setup |
| `nemoclaw-blueprint/blueprint.yaml` | Add custom inference profile |
| `test/inference-config.test.js` | Tests for custom provider config |
| `docs/inference/switch-inference-providers.md` | Document custom provider setup and non-interactive usage |
| `README.md` | Add custom provider to inference table |

Test plan

  • Custom provider with Google Gemini (gemini-2.5-flash) — end-to-end inference works
  • Local Ollama (llama3.2) — backward compatibility verified, supportsStore false has no effect
  • Unit tests for custom provider config (9/9 passing)
  • Non-interactive mode with all env vars
  • NVIDIA cloud backward compatibility
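The non-interactive path can be exercised by exporting the environment variables before running the wizard (the `NEMOCLAW_*` names come from this PR; the values below are placeholders):

```shell
# Placeholder values; only the variable names are from this PR.
export NEMOCLAW_PROVIDER=custom
export NEMOCLAW_CUSTOM_BASE_URL="https://openrouter.ai/api/v1"
export NEMOCLAW_CUSTOM_API_KEY="sk-placeholder"
export NEMOCLAW_MODEL="gemini-2.5-flash"
# then run the onboarding wizard; it reads the vars instead of prompting:
# nemoclaw onboard
echo "$NEMOCLAW_PROVIDER $NEMOCLAW_MODEL"
```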

Summary by CodeRabbit

  • New Features

    • Support for custom OpenAI-compatible inference providers (supply base URL, API key, model).
    • Inference routing now goes through the OpenShell gateway proxy; dashboard shows “Custom Provider” label.
  • Documentation

    • Updated onboarding and provider docs with interactive and non-interactive (env var) flows for custom provider setup.
    • README clarifies inference routing and NVIDIA endpoint/API key wording.


coderabbitai bot commented Mar 22, 2026

📝 Walkthrough

Walkthrough

Adds support for user-provided OpenAI-compatible inference endpoints across onboarding (interactive and non-interactive), CLI provider selection, runtime provider creation, docs, tests, and blueprint updates; getProviderSelectionConfig gains a custom branch; Dockerfile-generated model JSON now includes `compat: { supportsStore: false }`.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Build / Configuration**<br>`Dockerfile`, `nemoclaw-blueprint/blueprint.yaml` | Dockerfile: build-time generation of model JSON now includes `compat: { supportsStore: false }`. Blueprint: added `profiles.custom` and `components.inference.profiles.custom` (OpenAI-type `custom-provider`, blank endpoint/model, `credential_env: "OPENAI_API_KEY"`). |
| **Onboarding & CLI logic**<br>`bin/lib/onboard.js`, `bin/lib/inference-config.js` | Added custom provider support: onboarding reads, validates, and saves base URL, API key, and model (interactive and non-interactive env vars), returns `customCreds`, creates/updates an OpenShell openai-type provider (`custom-provider`), and sets inference routing; `getProviderSelectionConfig` gains a `"custom"` case returning `endpointType: "custom"`, `endpointUrl: INFERENCE_ROUTE_URL`, provider metadata, and defaults. |
| **Documentation & README**<br>`README.md`, `docs/inference/switch-inference-providers.md` | Documented "Custom OpenAI-compatible" providers, interactive prompts, non-interactive env vars (`NEMOCLAW_PROVIDER=custom`, `NEMOCLAW_CUSTOM_BASE_URL`, `NEMOCLAW_CUSTOM_API_KEY`, `NEMOCLAW_MODEL`), and the runtime `openshell provider create` + `openshell inference set --no-verify` workflow. |
| **Tests**<br>`test/inference-config.test.js`, `test/onboard-selection.test.js` | Added tests covering `getProviderSelectionConfig("custom", ...)`, expected fields (`endpointType`, `endpointUrl`, `providerLabel`, `model`), and URL validation behavior (http/https allowances, localhost-only `http://`, malformed-URL rejection). |
| **Docs / README minor**<br>`README.md` | Updated inference description to note routing via the gateway proxy and clarified NVIDIA API key wording; added a provider table entry for the Custom OpenAI-compatible option. |

Sequence Diagram

sequenceDiagram
    actor User
    participant Onboard as "bin/lib/onboard.js"
    participant Cred as "Credential Store"
    participant Config as "bin/lib/inference-config.js"
    participant Shell as "OpenShell CLI/API"
    participant Blueprint as "Blueprint YAML"

    User->>Onboard: select "Custom OpenAI-compatible"
    activate Onboard

    alt Interactive
        Onboard->>User: prompt base URL
        User-->>Onboard: base URL
        Onboard->>Cred: save base URL
        Onboard->>Cred: lookup API key for base URL
        alt no saved key
            Onboard->>User: prompt API key
            User-->>Onboard: API key
            Onboard->>Cred: save API key
        end
        Onboard->>User: prompt model
        User-->>Onboard: model
    else Non-interactive
        Onboard->>Onboard: read NEMOCLAW_CUSTOM_* env vars and validate
        Onboard->>Cred: save base URL and API key
    end

    Onboard->>Config: build provider selection config (provider: custom, model)
    Config-->>Onboard: config
    Onboard->>Shell: provider create/update (type: openai, base URL, API key)
    Shell-->>Onboard: provider created/updated
    Onboard->>Shell: inference set --provider custom-provider --model <model> --no-verify
    Shell-->>Onboard: inference routing set
    Onboard->>Blueprint: update blueprint/profile with custom provider info
    Blueprint-->>Onboard: updated

    deactivate Onboard
    Onboard-->>User: custom provider ready

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through prompts with keyboard bright,

A base URL, a key — set up by night.
Models now chatter through gateways I bring,
Custom endpoints hum; my whiskers sing.
A tiny rabbit cheers for routing's new spring.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 66.67%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately and concisely summarizes the main change: adding a custom OpenAI-compatible provider option to the onboarding wizard. |



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
nemoclaw-blueprint/blueprint.yaml (1)

57-63: Expose custom in the blueprint's declared profile list.

This adds components.inference.profiles.custom, but the top-level profiles: array still omits custom. Anything that enumerates declared profiles will miss the new provider.

```yaml
profiles:
  - default
  - ncp
  - nim-local
  - vllm
  - custom
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@nemoclaw-blueprint/blueprint.yaml` around lines 57 - 63, Add the new custom
provider to the blueprint's declared profiles by adding "custom" to the
top-level profiles array so it matches components.inference.profiles.custom;
update the profiles list (the YAML scalar sequence under the key profiles) to
include the entry custom alongside default, ncp, nim-local, and vllm.
test/inference-config.test.js (1)

59-76: Please cover the onboarding branch, not just the config mapper.

These cases only pin getProviderSelectionConfig(). The new env-var validation and credential persistence for custom live in bin/lib/onboard.js, and that path is still listed as pending in the PR objectives. A non-interactive setupNim() test with and without NEMOCLAW_CUSTOM_* would catch regressions there.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/inference-config.test.js` around lines 59 - 76, Add tests that exercise
the onboarding flow in bin/lib/onboard.js by invoking setupNim()
non-interactively to cover the onboarding branch (not just
getProviderSelectionConfig). Specifically, add two tests: one with
NEMOCLAW_CUSTOM_* env vars set and one without; for each, call setupNim() (or
the exported function that runs the onboarding path) and assert that env-var
validation behaves as expected and credentials are persisted/cleared
appropriately. Use the same test file pattern (test/inference-config.test.js)
and stub/mock any interactive prompts, file/system I/O, and process.env
mutations so the tests remain deterministic; reference setupNim,
getProviderSelectionConfig, and the NEMOCLAW_CUSTOM_* variables when locating
the code to exercise. Ensure tests assert both validation errors when env vars
are missing and successful credential persistence when present.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bin/lib/onboard.js`:
- Around line 714-732: The onboarding flow currently reuses an existing
CUSTOM_PROVIDER_API_KEY even when the entered baseUrl (from prompt stored via
saveCredential("CUSTOM_PROVIDER_BASE_URL")) changes, causing a saved key to be
paired with the wrong endpoint; update the logic in this block to compare the
stored CUSTOM_PROVIDER_BASE_URL (via getCredential("CUSTOM_PROVIDER_BASE_URL"))
against the newly entered baseUrl and if they differ, treat the API key as
stale: prompt for a new API key (using prompt), overwrite
CUSTOM_PROVIDER_API_KEY with saveCredential("CUSTOM_PROVIDER_API_KEY", apiKey),
and then save the new CUSTOM_PROVIDER_BASE_URL; otherwise preserve the existing
behavior of using the saved key.

In `@README.md`:
- Around line 187-191: The README currently says "Get an API key from
build.nvidia.com" in the paragraph that also documents custom OpenAI-compatible
providers; update the text so that the NVIDIA key instruction is scoped to the
NVIDIA Endpoint option only. Locate the paragraph referencing "Get an API key
from build.nvidia.com" near the "Custom OpenAI-compatible" table and change it
to a qualified sentence such as "If using the NVIDIA endpoint, get an API key
from build.nvidia.com" or move that instruction under the NVIDIA Endpoint
subsection referenced by the "nemoclaw onboard" flow and the environment
variables (NEMOCLAW_PROVIDER, NEMOCLAW_CUSTOM_BASE_URL, NEMOCLAW_CUSTOM_API_KEY,
NEMOCLAW_MODEL) so users selecting a custom provider won’t be misled.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1ed47649-c662-49b1-a651-cc98c4cd1610

📥 Commits

Reviewing files that changed from the base of the PR and between 04012f7 and 16cf3d4.

📒 Files selected for processing (6)
  • Dockerfile
  • README.md
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • nemoclaw-blueprint/blueprint.yaml
  • test/inference-config.test.js

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
bin/lib/onboard.js (1)

714-732: ⚠️ Potential issue | 🟠 Major

Refresh the saved API key when the base URL changes.

This still reuses CUSTOM_PROVIDER_API_KEY whenever one exists. Switching the custom endpoint can therefore pair the new base URL with the old key and leave the provider misconfigured. Also, because the new base URL is saved first, aborting the replacement-key prompt can make that stale key look reusable on the next rerun.

Suggested fix
```diff
-        const baseUrl = await prompt("  Base URL: ");
+        const baseUrl = (await prompt("  Base URL: ")).trim();
         if (!baseUrl) {
           console.error("  Base URL is required.");
           process.exit(1);
         }
-        saveCredential("CUSTOM_PROVIDER_BASE_URL", baseUrl);

-        let apiKey = getCredential("CUSTOM_PROVIDER_API_KEY");
+        const previousBaseUrl = getCredential("CUSTOM_PROVIDER_BASE_URL");
+        let apiKey =
+          previousBaseUrl === baseUrl ? getCredential("CUSTOM_PROVIDER_API_KEY") : null;
         if (!apiKey) {
-          apiKey = await prompt("  API Key: ");
+          apiKey = (await prompt("  API Key: ")).trim();
           if (!apiKey) {
             console.error("  API key is required.");
             process.exit(1);
           }
           saveCredential("CUSTOM_PROVIDER_API_KEY", apiKey);
           console.log("  Key saved to ~/.nemoclaw/credentials.json");
         } else {
           console.log("  Using saved API key from credentials.");
         }
+        saveCredential("CUSTOM_PROVIDER_BASE_URL", baseUrl);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 714 - 732, The code saves
CUSTOM_PROVIDER_BASE_URL before handling the API key and always reuses
CUSTOM_PROVIDER_API_KEY if present, which can mispair a new base URL with an old
key; modify the flow in the onboarding logic (the
prompt/saveCredential/getCredential sequence) so you first read the existing
base URL via getCredential("CUSTOM_PROVIDER_BASE_URL") and if it differs from
the newly entered baseUrl, clear or remove the stored API key (e.g. call
saveCredential("CUSTOM_PROVIDER_API_KEY", null/empty or a delete function) or
force re-prompt) before attempting to
reuse/getCredential("CUSTOM_PROVIDER_API_KEY"); alternatively prompt the user to
confirm reuse of the saved key when base URLs differ, and only save the new base
URL via saveCredential("CUSTOM_PROVIDER_BASE_URL") after handling the API key
update.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bin/lib/onboard.js`:
- Around line 696-697: The current non-interactive flow persists CI secrets by
calling saveCredential("CUSTOM_PROVIDER_BASE_URL", baseUrl) and
saveCredential("CUSTOM_PROVIDER_API_KEY", apiKey); instead, modify setupNim() to
return the discovered custom { baseUrl, apiKey } when run non-interactively and
update setupInference() to accept an optional { baseUrl, apiKey } parameter so
the non-interactive caller passes the creds directly; keep saveCredential calls
only in the interactive branch (and remove or guard the saveCredential calls
currently invoked at the locations referenced by setupNim()/setupInference() and
the lines corresponding to 830-831) so pipeline secrets are never written to
disk.
- Around line 689-697: Validate and normalize the custom base URL before
persisting: replace the current direct checks around baseUrl/apiKey/model
(variables baseUrl, apiKey, model) and the calls to
saveCredential("CUSTOM_PROVIDER_BASE_URL", ...) with logic that constructs a URL
object (new URL(baseUrl)), canonicalizes it (e.g., remove trailing slash), and
enforces only secure endpoints — require protocol === "https:" or allow
loopback/localhost addresses explicitly (or require an explicit opt-in env var
to accept insecure http), otherwise console.error and process.exit(1); apply the
same validation/normalization to the other branch that saves
OPENAI_BASE_URL/OPENAI_API_KEY (the 714-719 area) so both paths use identical
URL parsing and rejection rules before calling saveCredential.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9a98e147-f0d0-4e19-aea6-a7cede9598c0

📥 Commits

Reviewing files that changed from the base of the PR and between 16cf3d4 and 4bc9621.

📒 Files selected for processing (7)
  • Dockerfile
  • README.md
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • docs/inference/switch-inference-providers.md
  • nemoclaw-blueprint/blueprint.yaml
  • test/inference-config.test.js
✅ Files skipped from review due to trivial changes (1)
  • docs/inference/switch-inference-providers.md
🚧 Files skipped from review as they are similar to previous changes (4)
  • bin/lib/inference-config.js
  • nemoclaw-blueprint/blueprint.yaml
  • test/inference-config.test.js
  • README.md

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

742-746: Trim the model input for consistency.

customBaseUrl and customApiKey are trimmed, but model is not. Accidental leading/trailing whitespace could cause model routing failures.

Proposed fix
```diff
-        model = await prompt("  Model name (e.g. gemini-2.5-flash): ");
+        model = (await prompt("  Model name (e.g. gemini-2.5-flash): ")).trim();
         if (!model) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 742 - 746, Trim the user-entered model
string the same way as customBaseUrl/customApiKey: after reading model from
prompt (the variable model in the onboarding flow), call .trim() and then
validate (e.g., model = model && model.trim(); if (!model) { console.error(" 
Model name is required."); process.exit(1); }) so whitespace-only input is
rejected and stored model has no leading/trailing spaces.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1223ab49-9265-4598-a1e8-d3c91db64f21

📥 Commits

Reviewing files that changed from the base of the PR and between 4bc9621 and 30723a3.

📒 Files selected for processing (7)
  • Dockerfile
  • README.md
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • docs/inference/switch-inference-providers.md
  • nemoclaw-blueprint/blueprint.yaml
  • test/inference-config.test.js
✅ Files skipped from review due to trivial changes (1)
  • docs/inference/switch-inference-providers.md
🚧 Files skipped from review as they are similar to previous changes (5)
  • Dockerfile
  • test/inference-config.test.js
  • bin/lib/inference-config.js
  • nemoclaw-blueprint/blueprint.yaml
  • README.md

Add a "Custom OpenAI-compatible endpoint" option to the onboarding
wizard, allowing users to bring any provider that exposes an
OpenAI-compatible /v1/chat/completions endpoint (e.g. Google Gemini
via AI Studio, OpenRouter, Together AI, LiteLLM).

The custom provider follows the same gateway-routed architecture as
existing providers: the sandbox talks to inference.local, and the
OpenShell gateway proxies to the user's endpoint with credential
injection and model rewriting.

Non-NVIDIA endpoints may reject OpenAI-specific parameters like
"store". Set supportsStore: false in the default openclaw.json model
compat to prevent 400 rejections from strict endpoints. This is safe
for all providers — NVIDIA and Ollama ignore the flag.
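For illustration, the resulting default model entry in openclaw.json would look roughly like this (layout inferred from the description above, not copied from the generated file):

```json
{
  "model": "gemini-2.5-flash",
  "compat": { "supportsStore": false }
}
```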

Interactive mode prompts for base URL, API key, and model name.
Non-interactive mode reads NEMOCLAW_CUSTOM_BASE_URL,
NEMOCLAW_CUSTOM_API_KEY, and NEMOCLAW_MODEL.

Tested with Google Gemini (gemini-2.5-flash) and local Ollama
(llama3.2) to verify backward compatibility.
@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
bin/lib/onboard.js (1)

715-759: ⚠️ Potential issue | 🟠 Major

Validate the custom base URL before writing it to the credential store.

Line 722 persists CUSTOM_PROVIDER_BASE_URL before Lines 749-759 validate it and before a replacement key is definitely captured. If the user changes endpoints and then aborts at Line 731, the next run sees the new URL and silently reuses the old CUSTOM_PROVIDER_API_KEY for the wrong provider. This block also only rejects remote http: URLs, so unsupported schemes still pass, and http://[::1]:... needs explicit loopback handling.

🛠️ Proposed fix
```diff
         const previousBaseUrl = getCredential("CUSTOM_PROVIDER_BASE_URL");
-        saveCredential("CUSTOM_PROVIDER_BASE_URL", customBaseUrl);

         customApiKey = previousBaseUrl === customBaseUrl
           ? getCredential("CUSTOM_PROVIDER_API_KEY")
           : null;
@@
       // Validate base URL
       try {
         const parsed = new URL(customBaseUrl);
-        if (parsed.protocol === "http:" && !["localhost", "127.0.0.1", "::1"].includes(parsed.hostname)) {
+        const isLoopbackHost = ["localhost", "127.0.0.1", "::1", "[::1]"].includes(parsed.hostname);
+        if (!["http:", "https:"].includes(parsed.protocol)) {
+          console.error("  Base URL must use https://, or http:// only for localhost.");
+          process.exit(1);
+        }
+        if (parsed.protocol === "http:" && !isLoopbackHost) {
           console.error("  Insecure http:// URLs are only allowed for localhost. Use https:// for remote endpoints.");
           process.exit(1);
         }
       } catch {
         console.error(`  Invalid URL: ${customBaseUrl}`);
         process.exit(1);
       }
+
+      if (!isNonInteractive()) {
+        saveCredential("CUSTOM_PROVIDER_BASE_URL", customBaseUrl);
+      }
```

Expected output: a bracketed IPv6 hostname on line 1 and ftp: on line 2.

```bash
#!/bin/bash
# Verify IPv6 hostname serialization and that non-HTTP schemes parse successfully.
node - <<'NODE'
console.log(new URL("http://[::1]:4000/v1").hostname);
console.log(new URL("ftp://example.com/v1").protocol);
NODE
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 715 - 759, Currently
CUSTOM_PROVIDER_BASE_URL is saved via saveCredential("CUSTOM_PROVIDER_BASE_URL",
customBaseUrl) before the URL is validated and before deciding whether to keep
or prompt for the API key; move the URL validation (the new URL(...) try/catch
and checks) to immediately after reading customBaseUrl and before calling
saveCredential or deriving customApiKey; reject non-http/https schemes
explicitly (e.g., disallow ftp:, file:, etc.), normalize/handle IPv6 loopback by
recognizing both "::1" and bracketed "[::1]" as localhost, and only permit http
for localhost (127.0.0.1 and ::1) while requiring https for remote hosts; ensure
you only call saveCredential for CUSTOM_PROVIDER_BASE_URL and
CUSTOM_PROVIDER_API_KEY after validation and after the user has
provided/confirmed the API key so an aborted flow does not persist invalid or
mismatched credentials.
🧹 Nitpick comments (1)
test/onboard-selection.test.js (1)

95-138: These cases never hit the onboarding validator.

They only assert new URL() plus a duplicated isLocalhost predicate, so the checks in bin/lib/onboard.js can drift without this suite failing. Please drive setupNim() like the first test in this file, or extract the base-URL validation into a shared helper and test that directly. That would also let this file cover unsupported schemes and the IPv6 loopback form.
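A shared helper along those lines might look like the following. This is a sketch applying the rules discussed in this review; the name `validateBaseUrl` and the return shape are hypothetical, not the PR's actual code:

```javascript
// Hypothetical shared helper: https everywhere, http only for loopback
// hosts, every other scheme rejected. Node's WHATWG URL keeps the
// brackets in IPv6 hostnames, so "[::1]" is the form to match.
function validateBaseUrl(raw) {
  let parsed;
  try {
    parsed = new URL(raw);
  } catch {
    return { ok: false, reason: `Invalid URL: ${raw}` };
  }
  const loopback = ["localhost", "127.0.0.1", "[::1]"].includes(parsed.hostname);
  if (!["http:", "https:"].includes(parsed.protocol)) {
    return { ok: false, reason: "Only http:// and https:// are supported." };
  }
  if (parsed.protocol === "http:" && !loopback) {
    return { ok: false, reason: "http:// is only allowed for localhost." };
  }
  return { ok: true, url: parsed.href.replace(/\/$/, "") };
}

console.log(validateBaseUrl("http://[::1]:4000/v1").ok);  // true
console.log(validateBaseUrl("ftp://example.com/v1").ok);  // false
console.log(validateBaseUrl("http://example.com/v1").ok); // false
```

Extracting it this way would let both bin/lib/onboard.js and the test suite exercise the exact same predicate instead of duplicating it.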


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 33cbec8c-51ea-45ea-bf83-2f430ed0cb4b

📥 Commits

Reviewing files that changed from the base of the PR and between 30723a3 and 637d376.

📒 Files selected for processing (8)
  • Dockerfile
  • README.md
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • docs/inference/switch-inference-providers.md
  • nemoclaw-blueprint/blueprint.yaml
  • test/inference-config.test.js
  • test/onboard-selection.test.js
✅ Files skipped from review due to trivial changes (3)
  • docs/inference/switch-inference-providers.md
  • test/inference-config.test.js
  • README.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • Dockerfile
  • nemoclaw-blueprint/blueprint.yaml
  • bin/lib/inference-config.js

@wscurran wscurran added the enhancement: provider Use this label to identify requests to add a new AI provider to NemoClaw. label Mar 23, 2026