
feat: add Jetson Orin Nano support #405

Open

realkim93 wants to merge 10 commits into NVIDIA:main from realkim93:feat/jetson-orin-nano-support

Conversation


@realkim93 realkim93 commented Mar 19, 2026

Closes #404

Summary

Adds end-to-end Jetson Orin Nano support: from GPU detection through to a working OpenClaw agent with local Ollama inference.

Verified working

$ openclaw agent --agent main --message 'Say hello in Korean'
안녕하세요! (Hello in Korean)

Inference path: OpenClaw (sandbox) → host.openshell.internal:11434 → Ollama → nemotron-3-nano:4b

Test plan

# tests 35
# pass 33
# fail 0
# skipped 2  (discrete NVIDIA + Apple — expected on Jetson)
  • GPU detection: "NVIDIA Jetson Orin Nano ... Super", 7619 MB
  • Gateway: iptables-legacy patched, k3s starts without panic
  • Sandbox: Phase Ready
  • Inference: openclaw agent responds via nemotron-3-nano:4b
  • Re-run onboard: no port conflict errors
  • Cloud inference (nvidia-nim): unaffected (still routes via inference.local)

Tested on: Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Summary by CodeRabbit

  • New Features

    • Jetson unified‑memory support, new host setup flow (nemoclaw setup-jetson) and a dedicated Jetson preparation script.
    • Jetson-specific default model selection (4B) and automatic gateway image patching for Jetson iptables.
    • Local Ollama provider now uses a direct host gateway endpoint; sandbox policy and sandbox config ensure local Ollama access.
  • Improvements

    • Preflight/onboard adds extra cleanup, deterministic port handling, and Jetson-aware messaging.
  • Tests

    • Added Jetson-focused GPU and model-selection tests; consolidated GPU probing in tests.

Add GPU detection, iptables-legacy fix, and nemotron-3-nano:4b default
for Jetson Orin Nano Super (8GB, JetPack 6.x).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

coderabbitai bot commented Mar 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Detect Jetson unified-memory GPUs, prefer a smaller Jetson-specific Ollama model, patch the OpenShell gateway image for iptables-legacy on Jetson, add a host setup script and CLI command for Jetson preparation, and update local Ollama routing and tests for Jetson behavior.

Changes

  • GPU detection & model selection (bin/lib/nim.js, bin/lib/local-inference.js):
    detectGpu() now recognizes Jetson unified-memory devices (jetson: true, nimCapable: false, memory taken from system RAM). Added DEFAULT_OLLAMA_MODEL_JETSON = "nemotron-3-nano:4b". getOllamaModelOptions(runCapture, gpu) and getDefaultOllamaModel(runCapture, gpu) now accept gpu and prefer the Jetson-specific model and fallbacks when gpu.jetson is set.
  • Onboard workflow & gateway patching (bin/lib/onboard.js):
    promptOllamaModel() forwards gpu. preflight() frees gateway resources, treats Jetson as unified-memory (no NIM), and performs gateway cleanup. Added getGatewayImageTag() and patchGatewayImageForJetson() to rebuild and retag the gateway image with iptables-legacy when needed; startGateway(gpu) invokes patching on Jetson. Also writes ~/.nemoclaw/config.json inside the openclaw sandbox.
  • CLI & host setup script (bin/nemoclaw.js, scripts/setup-jetson.sh):
    New nemoclaw setup-jetson command. Added scripts/setup-jetson.sh to detect Jetson, add the non-root user to the docker group, ensure nvidia-container-runtime is configured and set as Docker's default runtime, load and persist kernel modules, restart Docker with retries, and print next steps.
  • Local inference routing & config (bin/lib/inference-config.js, bin/lib/local-inference.js):
    Exported the host gateway URL and getLocalProviderBaseUrl(...). inference-config now uses provider-specific direct endpoints (OLLAMA_DIRECT_URL / VLLM_DIRECT_URL) for ollama-local/vllm-local instead of the shared INFERENCE_ROUTE_URL.
  • Sandbox network policy (nemoclaw-blueprint/policies/openclaw-sandbox.yaml):
    Added network_policies.ollama_local allowing sandbox access to host.openshell.internal:11434 for /usr/local/bin/openclaw.
  • Tests (test/local-inference.test.js, test/nim.test.js, test/inference-config.test.js):
    Added Jetson model tests and the DEFAULT_OLLAMA_MODEL_JETSON import; nim tests now probe the GPU once and conditionally run Jetson/discrete/Apple checks; the ollama-local provider test was simplified to assert the endpoint and model fields.
  • New file (scripts/setup-jetson.sh):
    New executable host-side Jetson preparation script.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI as "nemoclaw setup-jetson"
    participant Setup as "scripts/setup-jetson.sh"
    participant Docker as "Docker Daemon"
    User->>CLI: run setup-jetson
    CLI->>Setup: sudo -E bash setup-jetson.sh
    Setup->>Setup: validate OS/arch/root and detect Jetson
    Setup->>Docker: ensure nvidia-container-runtime in daemon.json
    Setup->>Docker: restart docker (systemctl/service)
    Docker-->>Setup: docker recovers
    Setup->>User: print completion & next steps
sequenceDiagram
    participant User
    participant Onboard as "nemoclaw onboard"
    participant Preflight as preflight()
    participant GPUDetect as nim.detectGpu()
    participant ModelSelect as local-inference
    participant GatewayPatch as patchGatewayImageForJetson()
    participant StartGW as startGateway()
    User->>Onboard: run onboard
    Onboard->>GPUDetect: detectGpu()
    GPUDetect-->>Onboard: { jetson: true, nimCapable: false, totalMemoryMB, name }
    Onboard->>Preflight: preflight(gpu)
    Preflight->>ModelSelect: getDefaultOllamaModel(runCapture, gpu)
    ModelSelect-->>Preflight: return nemotron-3-nano:4b (or first available)
    Preflight->>StartGW: startGateway(gpu)
    StartGW->>GatewayPatch: patchGatewayImageForJetson(gpu)
    GatewayPatch-->>StartGW: patched/tagged image
    StartGW->>StartGW: start gateway container
    StartGW-->>Onboard: gateway ready

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐇 I sniffed a Jetson in the night,
Unified memory, cozy and light.
A smaller model snug in my pack,
Gateway patched—no k3s attack,
Hop! The warren's working right.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 40.91%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The PR title 'feat: add Jetson Orin Nano support' clearly summarizes the main objective of the changeset.
  • Linked Issues check: ✅ Passed. The PR addresses all coding requirements from issue #404: a GPU detection fallback for Jetson devices, gateway image patching for iptables-legacy, a Jetson-specific default Ollama model, the local inference routing fix, and network policy updates.
  • Out of Scope Changes check: ✅ Passed. All changes relate directly to issue #404: GPU detection, gateway patching, model selection, inference routing, port-conflict handling, and host setup commands. No unrelated changes detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (2)
scripts/setup-jetson.sh (1)

179-194: Consider extending Docker restart timeout.

The restart polling loop allows 20 seconds (10 × 2s) for Docker to recover. On resource-constrained Jetson devices, Docker might take longer to restart, especially if it needs to pull the NVIDIA runtime shim.

Consider extending to 30 seconds for more headroom on slower devices, though 20s is likely sufficient for most cases.

♻️ Optional: Increase restart timeout
-  for i in 1 2 3 4 5 6 7 8 9 10; do
+  for i in $(seq 1 15); do
     if docker info > /dev/null 2>&1; then
       break
     fi
-    [ "$i" -eq 10 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
+    [ "$i" -eq 15 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
     sleep 2
   done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-jetson.sh` around lines 179 - 194, The restart polling
currently loops 10 times with a 2s sleep (≈20s total); increase the timeout to
~30s by changing the loop from 1..10 to 1..15 and update the failure check to
compare against 15 (keep the same docker info check and logging around
NEEDS_RESTART, systemctl restart docker, service docker restart/dockerd). This
ensures the existing restart logic and messages remain but gives Docker more
time on slow Jetson devices.
bin/lib/onboard.js (1)

351-406: Well-designed idempotent image patching.

The implementation is solid:

  • Uses Docker label (io.nemoclaw.jetson-patched) for idempotency check
  • Falls back to symlinks if update-alternatives fails
  • Fails explicitly if iptables-legacy is unavailable (important for debugging)
  • Cleans up temp directory after build

One minor note: os is required locally on line 381 rather than at the top of the file. Since os is not imported at the top (only fs and path are), the local require is necessary; keeping the import close to its single use is reasonable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 351 - 406, The local require("os") inside
patchGatewayImageForJetson is redundant only if a top-level const os =
require("os") already exists; check the file imports and if os is already
imported at top remove the local const os = require("os") from
patchGatewayImageForJetson and use the top-level os, otherwise leave the local
require in place to avoid breaking the function.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4b79d258-4025-40eb-a63d-c6475c23cd92

📥 Commits

Reviewing files that changed from the base of the PR and between 3ba517d and e311944.

📒 Files selected for processing (7)
  • bin/lib/local-inference.js
  • bin/lib/nim.js
  • bin/lib/onboard.js
  • bin/nemoclaw.js
  • scripts/setup-jetson.sh
  • test/local-inference.test.js
  • test/nim.test.js

@wscurran added the enhancement (New feature or request) and Platform: AGX Thor/Orin (Support for Jetson AGX Thor and Orin) labels on Mar 19, 2026
@wscurran (Contributor)

I've taken a look at the changes and see that this PR adds support for the Jetson Orin Nano, which includes implementing a fallback for GPU detection and resolving an issue with iptables-legacy that was causing a k3s panic.

@wscurran added the priority: medium (Issue that should be addressed in upcoming releases) label on Mar 19, 2026
@realkim93 (Author)

Thanks for reviewing! This was tested on my Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra) — the gateway starts cleanly and the sandbox reaches Ready state with Ollama + nemotron-3-nano:4b. Let me know if there's anything I should adjust or if you'd like me to run additional tests on the device.

OpenShell 0.0.10 does not register inference.local in CoreDNS, so DNS
resolution fails inside the sandbox and OpenClaw cannot reach the local
Ollama instance. This is a known issue (NVIDIA#314, NVIDIA#385, NVIDIA#417).

Workaround: for ollama-local provider, route directly through
host.openshell.internal:11434 (Docker host-gateway alias) instead of
the broken inference.local virtual host. Also add ollama_local network
policy endpoint to the sandbox baseline policy so egress is allowed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@realkim93 (Author)

Added a commit to fix local Ollama inference routing inside the sandbox.

Problem: OpenShell 0.0.10 does not register inference.local in CoreDNS, so DNS resolution fails inside the sandbox. This is the same issue reported in #314, #385, #417 — OpenClaw inside the sandbox cannot reach the host Ollama.

Fix: For ollama-local provider, route directly through host.openshell.internal:11434 (Docker host-gateway alias) instead of inference.local. Also added ollama_local network policy endpoint to openclaw-sandbox.yaml.

All tests green: 33 passed, 2 skipped, 0 failed (35 total).
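The routing workaround described above can be sketched as a small selector. The Ollama endpoint (http://host.openshell.internal:11434/v1) comes from this thread; the vLLM port and the exact function shape are assumptions about getLocalProviderBaseUrl, not the PR's actual implementation.

```javascript
// Sketch of the inference.local bypass for local providers (see hedges above).
const HOST_GATEWAY = "host.openshell.internal";
const OLLAMA_DIRECT_URL = `http://${HOST_GATEWAY}:11434/v1`;
const VLLM_DIRECT_URL = `http://${HOST_GATEWAY}:8000/v1`; // port is illustrative
const INFERENCE_ROUTE_URL = "http://inference.local/v1"; // broken in OpenShell 0.0.10

function getLocalProviderBaseUrl(provider) {
  // TODO: drop the direct URLs once OpenShell registers inference.local
  // in CoreDNS (see #314, #385, #417).
  switch (provider) {
    case "ollama-local":
      return OLLAMA_DIRECT_URL;
    case "vllm-local":
      return VLLM_DIRECT_URL;
    default:
      return INFERENCE_ROUTE_URL;
  }
}
```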

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
test/inference-config.test.js (1)

33-40: Consider: tighten the URL assertion for completeness.

The regex /host\.openshell\.internal:11434/ validates the host and port but doesn't verify the protocol (http://) or path (/v1). A malformed URL like host.openshell.internal:11434 (missing protocol) would still pass.

If full validation is desired, consider a more specific match:

📝 Optional: stricter URL assertion
-    assert.match(cfg.endpointUrl, /host\.openshell\.internal:11434/);
+    assert.equal(cfg.endpointUrl, "http://host.openshell.internal:11434/v1");

Alternatively, restore full deep equality to catch regressions in ncpPartner, profile, or credentialEnv if those properties are expected to remain stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/inference-config.test.js` around lines 33 - 40, Tighten the URL
assertion in the test that calls getProviderSelectionConfig("ollama-local") by
asserting the full endpoint URL (including protocol and path) rather than just
matching host:port; update the check on cfg.endpointUrl to require
"http://host.openshell.internal:11434/v1" (or the canonical expected URL) and/or
replace the shallow property assertions with a deep equality check of the
returned config object to catch regressions in properties like ncpPartner,
profile, and credentialEnv while still keeping the expected values for
endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and providerLabel.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e2affc8e-3a9b-49c3-8c5e-e87f67d07302

📥 Commits

Reviewing files that changed from the base of the PR and between e311944 and 313c356.

📒 Files selected for processing (3)
  • bin/lib/inference-config.js
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
  • test/inference-config.test.js

realkim93 and others added 2 commits March 20, 2026 22:22
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…htening

- Guard runCapture().trim() against null in patchGatewayImageForJetson
- Apply same inference.local bypass to vllm-local (same DNS bug affects
  both local providers, not just Ollama)
- Use getLocalProviderBaseUrl() as single source of truth for direct URLs
- Add TODO to remove direct URLs when OpenShell fixes inference.local
- Remove overly broad /usr/local/bin/node from ollama_local network
  policy (openclaw binary alone is sufficient)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

381-404: Clean up the temporary build context in a finally.

docker build throws here on failure, so Line 404 never runs and nemoclaw-jetson-* directories accumulate under the temp dir on repeated failures. A try/finally with fs.rmSync(..., { recursive: true, force: true }) makes the cleanup deterministic and avoids the extra shell-out.

♻️ Suggested change
   const os = require("os");
   const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-jetson-"));
   const dockerfile = path.join(tmpDir, "Dockerfile");
-  fs.writeFileSync(
-    dockerfile,
-    [
-      `FROM ${image}`,
-      `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
-      `      update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
-      `      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
-      `      :; \\`,
-      `    elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
-      `      ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
-      `      ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
-      `    else \\`,
-      `      echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
-      `    fi`,
-      `LABEL io.nemoclaw.jetson-patched="true"`,
-      "",
-    ].join("\n")
-  );
-
-  run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
-  run(`rm -rf "${tmpDir}"`, { ignoreError: true });
+  try {
+    fs.writeFileSync(
+      dockerfile,
+      [
+        `FROM ${image}`,
+        `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
+        `      update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
+        `      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
+        `      :; \\`,
+        `    elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
+        `      ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
+        `      ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
+        `    else \\`,
+        `      echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
+        `    fi`,
+        `LABEL io.nemoclaw.jetson-patched="true"`,
+        "",
+      ].join("\n")
+    );
+
+    run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
+  } finally {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 381 - 404, The temp build context created
via fs.mkdtempSync (tmpDir) and populated (dockerfile via fs.writeFileSync) can
leak when run(`docker build...`) throws; wrap the build and any operations that
may throw in a try/finally around the run call(s) so cleanup always happens, and
in the finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead
of run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-*
directory; adjust ordering so tmpDir is removed after the build attempt and keep
existing ignoreError semantics removed in favor of the synchronous rmSync call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bin/lib/onboard.js`:
- Around line 146-148: The sandbox config builder incorrectly maps non-default
local Ollama models to "nvidia-nim" by relying on DEFAULT_OLLAMA_MODEL; update
buildSandboxConfigSyncScript to determine provider type from the selected
provider or endpoint metadata (e.g., inspect the selected provider object or
endpoint.provider/type field returned by promptOllamaModel or
getOllamaModelOptions) instead of comparing model names to DEFAULT_OLLAMA_MODEL,
and apply the same metadata-driven logic to the other mapping block that mirrors
this behavior (the later duplicate mapping). Locate references to
DEFAULT_OLLAMA_MODEL and replace the conditional mapping with logic that reads
provider.type/providerName or endpoint metadata and sets the sandbox provider to
that derived value.
- Around line 279-285: Move the "kill $(lsof -ti :18789 -c openclaw) ..."
cleanup so it runs after the sandbox recreate decision in createSandbox()
instead of before it: currently the port-18789 shutdown is executed
unconditionally near the top of onboard.js (adjacent to the existing run(...)
for gateway destroy), which can stop a valid dashboard forward before
createSandbox() chooses the existing-sandbox path; update the control flow to
keep the existing early cleanup for the nemoclaw gateway but defer the openclaw
port-18789 kill until after createSandbox() determines whether to recreate
(i.e., run the kill only in the branch that actually recreates/restarts the
sandbox/dashboard, not in the pre-check path).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2fc76141-d389-4c2e-88ba-76308543f6a0

📥 Commits

Reviewing files that changed from the base of the PR and between 78d320c and 980e3ab.

📒 Files selected for processing (3)
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
  • bin/lib/inference-config.js

The NemoClaw OpenClaw plugin reads ~/.nemoclaw/config.json to display
the configured endpoint and model in its startup banner. Without this
file in the sandbox, it falls back to "build.nvidia.com" and the 120B
cloud model — misleading on Jetson where Ollama is the actual provider.

Write the provider selection config into the sandbox during setupOpenclaw
so the plugin banner correctly shows "ollama" + "nemotron-3-nano:4b".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
bin/lib/onboard.js (2)

685-700: ⚠️ Potential issue | 🟠 Major

Jetson default model selection still conflicts with sandbox provider serialization.

Good call passing gpu here, but this now more reliably picks a Jetson-specific Ollama default, which still collides with the model-name heuristic in buildSandboxConfigSyncScript() (local Ollama can be written as nvidia-nim).

Suggested fix at the root cause (provider derivation)
 function buildSandboxConfigSyncScript(selectionConfig) {
-  const providerType =
-    selectionConfig.profile === "inference-local"
-      ? selectionConfig.model === DEFAULT_OLLAMA_MODEL
-        ? "ollama-local"
-        : "nvidia-nim"
-      : selectionConfig.endpointType === "vllm"
-        ? "vllm-local"
-        : "nvidia-nim";
+  const localKind = String(
+    selectionConfig.provider || selectionConfig.endpointType || ""
+  ).toLowerCase();
+  const providerType =
+    selectionConfig.profile === "inference-local"
+      ? localKind.includes("ollama")
+        ? "ollama-local"
+        : localKind.includes("vllm")
+          ? "vllm-local"
+          : "nvidia-nim"
+      : "nvidia-nim";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 685 - 700, The chosen Jetson-specific
default model (via getDefaultOllamaModel / promptOllamaModel) can still conflict
with the provider-detection heuristic in buildSandboxConfigSyncScript; fix this
by normalizing model names or deriving provider from the explicit provider
variable instead of the model string: update buildSandboxConfigSyncScript to use
the provider variable (provider === "ollama-local") or add a normalization
helper (e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific
names like "nvidia-nim" to the canonical form used by sandbox serialization, and
ensure getDefaultOllamaModel and promptOllamaModel return/consult that
normalized value so provider serialization remains consistent with the selected
model.

279-285: ⚠️ Potential issue | 🟠 Major

Defer port 18789 process cleanup until recreate/create is confirmed.

Line 284 still unconditionally kills the dashboard-forward process before createSandbox() decides whether to recreate. If the user keeps an existing sandbox, onboarding can exit with the dashboard no longer forwarded.

Proposed control-flow fix
-  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
   sleep(2);
 async function createSandbox(gpu) {
   step(3, 7, "Creating sandbox");
@@
   if (existing) {
@@
       if (recreate.toLowerCase() !== "y") {
         console.log("  Keeping existing sandbox.");
         return sandboxName;
       }
@@
   }
+
+  // Only clean stale dashboard forwards/processes when we are actually creating/recreating.
+  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 279 - 285, The unconditional kill of the
dashboard-forward process (the run invocation that executes "kill $(lsof -ti
:18789 -c openclaw) ...") should be deferred until after createSandbox()
determines a recreate/create is required; change control flow so createSandbox()
(or the caller that decides recreate) runs first and only when it indicates a
new sandbox will be created/recreated do we execute the run("kill $(lsof -ti
:18789 -c openclaw) 2>/dev/null || true", { ignoreError: true }) cleanup (and
keep sleep(2) paired with that cleanup if still needed).
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

812-824: Remove unreachable duplicate config-write block.

buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json and exits; the appended nemoClawConfigScript after ${script} never executes.

Cleanup diff
-    // Also write ~/.nemoclaw/config.json inside the sandbox so the NemoClaw
-    // plugin displays the correct endpoint/model in its banner instead of
-    // falling back to the cloud defaults.
-    const nemoClawConfigScript = `
-mkdir -p ~/.nemoclaw
-cat > ~/.nemoclaw/config.json <<'EOF_NEMOCLAW_CFG'
-${JSON.stringify(sandboxConfig, null, 2)}
-EOF_NEMOCLAW_CFG
-`;
     run(`cat <<'EOF_NEMOCLAW_SYNC' | openshell sandbox connect "${sandboxName}"
 ${script}
-${nemoClawConfigScript}
 exit
 EOF_NEMOCLAW_SYNC`, { stdio: ["ignore", "ignore", "inherit"] });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 812 - 824, The appended nemoClawConfigScript
block is redundant because buildSandboxConfigSyncScript() already writes
~/.nemoclaw/config.json and exits, so remove the duplicate: delete the
nemoClawConfigScript constant and stop injecting ${nemoClawConfigScript} into
the run(...) payload (the run call that uses openshell sandbox connect
"${sandboxName}"), leaving only the previously built ${script} so that the
remote script exits as intended; reference buildSandboxConfigSyncScript(),
nemoClawConfigScript, and the run(...) invocation to locate the code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@bin/lib/onboard.js`:
- Around line 685-700: The chosen Jetson-specific default model (via
getDefaultOllamaModel / promptOllamaModel) can still conflict with the
provider-detection heuristic in buildSandboxConfigSyncScript; fix this by
normalizing model names or deriving provider from the explicit provider variable
instead of the model string: update buildSandboxConfigSyncScript to use the
provider variable (provider === "ollama-local") or add a normalization helper
(e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific names
like "nvidia-nim" to the canonical form used by sandbox serialization, and
ensure getDefaultOllamaModel and promptOllamaModel return/consult that
normalized value so provider serialization remains consistent with the selected
model.
- Around lines 279-285: the unconditional kill of the dashboard-forward process
(the run invocation that executes "kill $(lsof -ti :18789 -c openclaw) ...")
should be deferred until after createSandbox() determines a create or recreate
is required. Change the control flow so createSandbox() (or the caller that
decides on recreation) runs first, and only when it indicates a new sandbox
will be created or recreated execute the run("kill $(lsof -ti :18789 -c
openclaw) 2>/dev/null || true", { ignoreError: true }) cleanup (keeping
sleep(2) paired with that cleanup if still needed).
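The provider-metadata fix described in the first item above can be sketched as follows; `resolveProviderType` and the `selectionConfig` shape are illustrative assumptions, not the actual onboard.js API:

```javascript
// Sketch: trust explicit provider metadata over model-name matching, so a
// Jetson default such as "nemotron-3-nano:4b" cannot be misclassified by a
// string comparison against DEFAULT_OLLAMA_MODEL.
function resolveProviderType(selectionConfig) {
  return selectionConfig.provider === "ollama-local" ? "ollama-local" : "nvidia-nim";
}
```

Because the decision keys off `selectionConfig.provider` alone, any future model rename on either provider leaves serialization unaffected.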



📥 Commits

Reviewing files that changed from the base of the PR and between 980e3ab and dc3c12c.

📒 Files selected for processing (1)
  • bin/lib/onboard.js

The Docker image bakes in an 'nvidia' provider pointing to inference.local,
which fails DNS resolution inside the sandbox. The setupOpenclaw step already
writes the correct 'inference' provider, but the stale 'nvidia' entry
remains and can cause OpenClaw to attempt the broken route first.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
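A minimal sketch of that cleanup, assuming a hypothetical `pruneStaleProviders` helper and a simplified config shape (the real openclaw.json layout may differ):

```javascript
// Sketch: drop the baked-in "nvidia" provider that points at inference.local
// (unresolvable inside the sandbox), keeping the freshly written "inference"
// provider intact. The input object is copied, not mutated.
function pruneStaleProviders(config) {
  const providers = { ...config.providers };
  if (providers.nvidia?.baseUrl?.includes("inference.local")) {
    delete providers.nvidia;
  }
  return { ...config, providers };
}
```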
@ericksoa
Contributor

Hey @realkim93 — we independently hit the same issue (#539) and arrived at the same iptables-legacy image-patching approach, so we're confident the core fix here is sound.

We did a full review of the diff and everything looks clean. One thing we noticed (also flagged by CodeRabbit): the port-18789 cleanup in preflight() runs before createSandbox() decides whether to recreate, which could leave a healthy sandbox without its dashboard forward on a no-op rerun. Could you take a look at that timing issue?

Once that's addressed we're happy to approve from our side. Nice work getting this tested on real hardware.
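The timing fix being requested could be factored like this hedged sketch, with the create/recreate decision passed in and `run` injected for testability (names are illustrative, not the actual onboard.js signatures):

```javascript
// Sketch: run the port-18789 cleanup only once the create/recreate decision
// is known, so a no-op rerun leaves a healthy dashboard forward untouched.
function maybeCleanupDashboardForward(decision, run) {
  if (decision !== "create" && decision !== "recreate") return false;
  run('kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true', { ignoreError: true });
  return true;
}
```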

… and cleanup safety

- Defer port-18789 kill to createSandbox() after recreate decision so
  no-op reruns don't break a healthy dashboard forward
- Derive provider type from selectionConfig.provider metadata instead of
  comparing model names to DEFAULT_OLLAMA_MODEL (fixes Jetson misclassification)
- Wrap patchGatewayImageForJetson tmpDir in try/finally with fs.rmSync
- Remove unreachable duplicate nemoClawConfigScript in setupOpenclaw
- Extend Docker restart timeout to 30s for slower Jetson devices

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@realkim93
Author

realkim93 commented Mar 23, 2026

Thanks @ericksoa and @wscurran for the thoughtful review. Really appreciate you taking the time to go through the diff carefully.

@ericksoa Thank you for raising the port-18789 timing issue. I've pushed a fix that defers the dashboard-forward cleanup into createSandbox(), so it only runs when we're actually recreating or creating a new sandbox. The port-18789 availability check in preflight is also removed since that port is now fully managed inside createSandbox(). Also addressed the provider-type mapping that CodeRabbit flagged: it now derives from provider metadata instead of comparing model names, which should handle the Jetson default model correctly.

Other cleanup in this commit:

  • patchGatewayImageForJetson tmpDir wrapped in try/finally
  • Removed unreachable duplicate config-write block in setupOpenclaw
  • Extended Docker restart timeout in setup-jetson.sh for slower Jetson devices

Let me know if there's anything else that needs attention!

realkim93 and others added 3 commits March 23, 2026 10:43
The previous commit deferred the port-18789 kill to createSandbox(), but
left the port availability check in preflight. This caused a hard exit
when re-running onboard with an existing dashboard forward still active.
Port 18789 is now fully managed inside createSandbox().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- nemoclaw.js: merge both setup-jetson and debug/uninstall/version commands,
  use main's color-formatted help with added setup-jetson line
- onboard.js: adopt main's immutable openclaw.json approach and
  conditional stale gateway cleanup via hasStaleGateway()
- inference-config.test.js: adopt main's expect/toEqual style with
  INFERENCE_ROUTE_URL
- nim.test.js: keep Jetson-specific GPU test with main's expect style

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test: use getLocalProviderBaseUrl() for ollama-local
  endpoint URL (host-gateway bypass for OpenShell 0.0.10 DNS issue)
- local-inference.test: convert assert → expect (vitest) for jetson tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Labels

  • enhancement (New feature or request)
  • Platform: AGX Thor/Orin (Support for Jetson AGX Thor and Orin)
  • priority: medium (Issue that should be addressed in upcoming releases)


Development

Successfully merging this pull request may close these issues.

feat: Jetson Orin Nano support — GPU detection fails, k3s panics on iptables

3 participants