Conversation
Add GPU detection, iptables-legacy fix, and nemotron-3-nano:4b default for Jetson Orin Nano Super (8GB, JetPack 6.x). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Detect Jetson unified-memory GPUs, prefer a smaller Jetson-specific Ollama model, patch the OpenShell gateway image for iptables-legacy on Jetson, add a host setup script and CLI command for Jetson preparation, and update local Ollama routing and tests for Jetson behavior.
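The walkthrough's Jetson detection hinges on `nvidia-smi` reporting `[N/A]` for memory.total on unified-memory boards, with the GPU name and `/proc/device-tree/model` as fallbacks and system RAM read via `free(1)` (per the PR summary). A minimal sketch of that classification, assuming illustrative inputs — `detectJetsonGpu` and its signature are not the real `nim.detectGpu()` API:

```javascript
// Sketch: classify a GPU from an nvidia-smi CSV line ("name, memory.total"),
// falling back to the device-tree model string on Jetson, where memory.total
// reads "[N/A]" because of unified memory. Illustrative names throughout.
function detectJetsonGpu(smiCsvLine, deviceTreeModel, systemRamMB) {
  const [name, memTotal] = smiCsvLine.split(",").map((s) => s.trim());
  const unifiedMemory = memTotal === "[N/A]";
  const jetson =
    unifiedMemory &&
    (/jetson/i.test(name) || /jetson/i.test(deviceTreeModel || ""));
  return {
    name,
    jetson,
    nimCapable: !jetson, // per the walkthrough, Jetson reports nimCapable: false
    totalMemoryMB: unifiedMemory ? systemRamMB : parseInt(memTotal, 10),
  };
}

const gpu = detectJetsonGpu(
  "NVIDIA Jetson Orin Nano ... Super, [N/A]",
  "NVIDIA Jetson Orin Nano Developer Kit",
  7619
);
// gpu.jetson === true, gpu.totalMemoryMB === 7619
```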
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as "nemoclaw setup-jetson"
    participant Setup as "scripts/setup-jetson.sh"
    participant Docker as "Docker Daemon"
    User->>CLI: run setup-jetson
    CLI->>Setup: sudo -E bash setup-jetson.sh
    Setup->>Setup: validate OS/arch/root and detect Jetson
    Setup->>Docker: ensure nvidia-container-runtime in daemon.json
    Setup->>Docker: restart docker (systemctl/service)
    Docker-->>Setup: docker recovers
    Setup->>User: print completion & next steps
```

```mermaid
sequenceDiagram
    participant User
    participant Onboard as "nemoclaw onboard"
    participant Preflight as preflight()
    participant GPUDetect as nim.detectGpu()
    participant ModelSelect as local-inference
    participant GatewayPatch as patchGatewayImageForJetson()
    participant StartGW as startGateway()
    User->>Onboard: run onboard
    Onboard->>GPUDetect: detectGpu()
    GPUDetect-->>Onboard: { jetson: true, nimCapable: false, totalMemoryMB, name }
    Onboard->>Preflight: preflight(gpu)
    Preflight->>ModelSelect: getDefaultOllamaModel(runCapture, gpu)
    ModelSelect-->>Preflight: return nemotron-3-nano:4b (or first available)
    Preflight->>StartGW: startGateway(gpu)
    StartGW->>GatewayPatch: patchGatewayImageForJetson(gpu)
    GatewayPatch-->>StartGW: patched/tagged image
    StartGW->>StartGW: start gateway container
    StartGW-->>Onboard: gateway ready
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🧹 Nitpick comments (2)
scripts/setup-jetson.sh (1)
179-194: Consider extending the Docker restart timeout.

The restart polling loop allows 20 seconds (10 × 2s) for Docker to recover. On resource-constrained Jetson devices, Docker might take longer to restart, especially if it needs to pull the NVIDIA runtime shim.
Consider extending to 30 seconds for more headroom on slower devices, though 20s is likely sufficient for most cases.
♻️ Optional: Increase restart timeout
♻️ Optional: Increase restart timeout

```diff
-for i in 1 2 3 4 5 6 7 8 9 10; do
+for i in $(seq 1 15); do
   if docker info > /dev/null 2>&1; then
     break
   fi
-  [ "$i" -eq 10 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
+  [ "$i" -eq 15 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
   sleep 2
 done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-jetson.sh` around lines 179-194: the restart polling currently loops 10 times with a 2s sleep (≈20s total); increase the timeout to ~30s by changing the loop from 1..10 to 1..15 and updating the failure check to compare against 15 (keep the same docker info check and logging around NEEDS_RESTART, systemctl restart docker, service docker restart/dockerd). This keeps the existing restart logic and messages but gives Docker more time on slow Jetson devices.

bin/lib/onboard.js (1)
351-406: Well-designed idempotent image patching.

The implementation is solid:

- Uses a Docker label (`io.nemoclaw.jetson-patched`) for the idempotency check
- Falls back to symlinks if `update-alternatives` fails
- Fails explicitly if iptables-legacy is unavailable (important for debugging)
- Cleans up the temp directory after the build

One minor note: `os` is required locally at line 381. Only `fs` and `path` are imported at the top of this file, so the local `require("os")` is necessary; since `os` is a core module, keeping the import close to its only usage is fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 351-406: the local require("os") inside patchGatewayImageForJetson is redundant only if a top-level `const os = require("os")` already exists; check the file imports, and if os is already imported at the top, remove the local require from patchGatewayImageForJetson and use the top-level os; otherwise leave the local require in place to avoid breaking the function.
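The idempotency check discussed above can be sketched as a pure decision over `docker inspect` output; `isAlreadyPatched` is an illustrative helper name, not the actual onboard.js code:

```javascript
// Sketch: decide whether the gateway image already carries the Jetson patch
// by looking for the io.nemoclaw.jetson-patched label in the JSON that
// `docker inspect <image>` prints. Illustrative helper, not the real code.
function isAlreadyPatched(inspectJson) {
  try {
    const [info] = JSON.parse(inspectJson);
    const labels = (info && info.Config && info.Config.Labels) || {};
    return labels["io.nemoclaw.jetson-patched"] === "true";
  } catch {
    return false; // unparseable or empty inspect output → treat as unpatched
  }
}

const patched = JSON.stringify([
  { Config: { Labels: { "io.nemoclaw.jetson-patched": "true" } } },
]);
const unpatched = JSON.stringify([{ Config: { Labels: {} } }]);
// isAlreadyPatched(patched) === true, isAlreadyPatched(unpatched) === false
```

Keeping the check pure like this makes it trivial to unit-test without a Docker daemon; only the caller shells out.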
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 351-406: The local require("os") inside patchGatewayImageForJetson
is redundant only if a top-level const os = require("os") already exists; check
the file imports and if os is already imported at top remove the local const os
= require("os") from patchGatewayImageForJetson and use the top-level os,
otherwise leave the local require in place to avoid breaking the function.
In `@scripts/setup-jetson.sh`:
- Around line 179-194: The restart polling currently loops 10 times with a 2s
sleep (≈20s total); increase the timeout to ~30s by changing the loop from 1..10
to 1..15 and update the failure check to compare against 15 (keep the same
docker info check and logging around NEEDS_RESTART, systemctl restart docker,
service docker restart/dockerd). This ensures the existing restart logic and
messages remain but gives Docker more time on slow Jetson devices.
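The bounded-retry pattern described in the prompt above (N attempts with a fixed sleep between tries) can be sketched generically; `pollUntil` is an illustrative helper name, and the real implementation stays in bash inside setup-jetson.sh:

```javascript
// Generic bounded polling: call check() up to `attempts` times, invoking
// sleep() between tries; returns true as soon as check() succeeds,
// false if every attempt fails. Mirrors the script's 15 × 2s ≈ 30s loop.
function pollUntil(check, attempts, sleep) {
  for (let i = 1; i <= attempts; i++) {
    if (check()) return true;
    if (i < attempts) sleep();
  }
  return false;
}

// Example: a check that succeeds on the third attempt.
let calls = 0;
const ok = pollUntil(() => ++calls >= 3, 15, () => {});
// ok === true, calls === 3
```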
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4b79d258-4025-40eb-a63d-c6475c23cd92
📒 Files selected for processing (7)
- bin/lib/local-inference.js
- bin/lib/nim.js
- bin/lib/onboard.js
- bin/nemoclaw.js
- scripts/setup-jetson.sh
- test/local-inference.test.js
- test/nim.test.js
I've taken a look at the changes and see that this PR adds support for the Jetson Orin Nano, which includes implementing a fallback for GPU detection and resolving an issue with iptables-legacy that was causing a k3s panic.
Thanks for reviewing! This was tested on my Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra) — the gateway starts cleanly and the sandbox reaches Ready state with Ollama + nemotron-3-nano:4b. Let me know if there's anything I should adjust or if you'd like me to run additional tests on the device.
OpenShell 0.0.10 does not register inference.local in CoreDNS, so DNS resolution fails inside the sandbox and OpenClaw cannot reach the local Ollama instance. This is a known issue (NVIDIA#314, NVIDIA#385, NVIDIA#417). Workaround: for ollama-local provider, route directly through host.openshell.internal:11434 (Docker host-gateway alias) instead of the broken inference.local virtual host. Also add ollama_local network policy endpoint to the sandbox baseline policy so egress is allowed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Added a commit to fix local Ollama inference routing inside the sandbox.

Problem: OpenShell 0.0.10 does not register `inference.local` in CoreDNS, so DNS resolution fails inside the sandbox and OpenClaw cannot reach the local Ollama instance.

Fix: for the `ollama-local` provider, route directly through `host.openshell.internal:11434` (the Docker host-gateway alias) instead of the broken `inference.local` virtual host.

All tests pass (33 passed, 2 skipped out of 35).
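The direct-route lookup this fix introduces can be sketched as below. The Ollama port 11434 and the `/v1` path come from this PR; the vLLM port 8000 is an assumed placeholder, and `getLocalProviderBaseUrl` here is a sketch of the helper, not the actual inference-config.js source:

```javascript
// Sketch: local providers bypass the broken inference.local virtual host
// (OpenShell 0.0.10 DNS issue) and go straight to the Docker host-gateway
// alias. vLLM's port below is an assumption for illustration.
function getLocalProviderBaseUrl(provider) {
  const ports = { "ollama-local": 11434, "vllm-local": 8000 };
  const port = ports[provider];
  if (!port) throw new Error(`not a local provider: ${provider}`);
  return `http://host.openshell.internal:${port}/v1`;
}

const url = getLocalProviderBaseUrl("ollama-local");
// url === "http://host.openshell.internal:11434/v1"
```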
🧹 Nitpick comments (1)
test/inference-config.test.js (1)
33-40: Consider tightening the URL assertion for completeness.

The regex `/host\.openshell\.internal:11434/` validates the host and port but doesn't verify the protocol (`http://`) or path (`/v1`). A malformed URL like `host.openshell.internal:11434` (missing protocol) would still pass.

If full validation is desired, consider a more specific match:

📝 Optional: stricter URL assertion

```diff
-assert.match(cfg.endpointUrl, /host\.openshell\.internal:11434/);
+assert.equal(cfg.endpointUrl, "http://host.openshell.internal:11434/v1");
```

Alternatively, restore full deep equality to catch regressions in `ncpPartner`, `profile`, or `credentialEnv` if those properties are expected to remain stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/inference-config.test.js` around lines 33 - 40, Tighten the URL assertion in the test that calls getProviderSelectionConfig("ollama-local") by asserting the full endpoint URL (including protocol and path) rather than just matching host:port; update the check on cfg.endpointUrl to require "http://host.openshell.internal:11434/v1" (or the canonical expected URL) and/or replace the shallow property assertions with a deep equality check of the returned config object to catch regressions in properties like ncpPartner, profile, and credentialEnv while still keeping the expected values for endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and providerLabel.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/inference-config.test.js`:
- Around line 33-40: Tighten the URL assertion in the test that calls
getProviderSelectionConfig("ollama-local") by asserting the full endpoint URL
(including protocol and path) rather than just matching host:port; update the
check on cfg.endpointUrl to require "http://host.openshell.internal:11434/v1"
(or the canonical expected URL) and/or replace the shallow property assertions
with a deep equality check of the returned config object to catch regressions in
properties like ncpPartner, profile, and credentialEnv while still keeping the
expected values for endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and
providerLabel.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e2affc8e-3a9b-49c3-8c5e-e87f67d07302
📒 Files selected for processing (3)
- bin/lib/inference-config.js
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
- test/inference-config.test.js
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…htening

- Guard runCapture().trim() against null in patchGatewayImageForJetson
- Apply the same inference.local bypass to vllm-local (the same DNS bug affects both local providers, not just Ollama)
- Use getLocalProviderBaseUrl() as the single source of truth for direct URLs
- Add TODO to remove direct URLs when OpenShell fixes inference.local
- Remove overly broad /usr/local/bin/node from the ollama_local network policy (the openclaw binary alone is sufficient)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)
381-404: Clean up the temporary build context in a `finally`.

`docker build` throws here on failure, so line 404 never runs and `nemoclaw-jetson-*` directories accumulate under the temp dir on repeated failures. A `try/finally` with `fs.rmSync(..., { recursive: true, force: true })` makes the cleanup deterministic and avoids the extra shell-out.

♻️ Suggested change

```diff
 const os = require("os");
 const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-jetson-"));
 const dockerfile = path.join(tmpDir, "Dockerfile");
-fs.writeFileSync(
-  dockerfile,
-  [
-    `FROM ${image}`,
-    `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
-    `  update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
-    `  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
-    `  :; \\`,
-    `  elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
-    `  ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
-    `  ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
-    `  else \\`,
-    `  echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
-    `  fi`,
-    `LABEL io.nemoclaw.jetson-patched="true"`,
-    "",
-  ].join("\n")
-);
-
-run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
-run(`rm -rf "${tmpDir}"`, { ignoreError: true });
+try {
+  fs.writeFileSync(
+    dockerfile,
+    [
+      `FROM ${image}`,
+      `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
+      `  update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
+      `  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
+      `  :; \\`,
+      `  elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
+      `  ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
+      `  ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
+      `  else \\`,
+      `  echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
+      `  fi`,
+      `LABEL io.nemoclaw.jetson-patched="true"`,
+      "",
+    ].join("\n")
+  );
+
+  run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
+} finally {
+  fs.rmSync(tmpDir, { recursive: true, force: true });
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 381 - 404, The temp build context created via fs.mkdtempSync (tmpDir) and populated (dockerfile via fs.writeFileSync) can leak when run(`docker build...`) throws; wrap the build and any operations that may throw in a try/finally around the run call(s) so cleanup always happens, and in the finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead of run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-* directory; adjust ordering so tmpDir is removed after the build attempt and keep existing ignoreError semantics removed in favor of the synchronous rmSync call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Around line 146-148: The sandbox config builder incorrectly maps non-default
local Ollama models to "nvidia-nim" by relying on DEFAULT_OLLAMA_MODEL; update
buildSandboxConfigSyncScript to determine provider type from the selected
provider or endpoint metadata (e.g., inspect the selected provider object or
endpoint.provider/type field returned by promptOllamaModel or
getOllamaModelOptions) instead of comparing model names to DEFAULT_OLLAMA_MODEL,
and apply the same metadata-driven logic to the other mapping block that mirrors
this behavior (the later duplicate mapping). Locate references to
DEFAULT_OLLAMA_MODEL and replace the conditional mapping with logic that reads
provider.type/providerName or endpoint metadata and sets the sandbox provider to
that derived value.
- Around line 279-285: Move the "kill $(lsof -ti :18789 -c openclaw) ..."
cleanup so it runs after the sandbox recreate decision in createSandbox()
instead of before it: currently the port-18789 shutdown is executed
unconditionally near the top of onboard.js (adjacent to the existing run(...)
for gateway destroy), which can stop a valid dashboard forward before
createSandbox() chooses the existing-sandbox path; update the control flow to
keep the existing early cleanup for the nemoclaw gateway but defer the openclaw
port-18789 kill until after createSandbox() determines whether to recreate
(i.e., run the kill only in the branch that actually recreates/restarts the
sandbox/dashboard, not in the pre-check path).
---
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 381-404: The temp build context created via fs.mkdtempSync
(tmpDir) and populated (dockerfile via fs.writeFileSync) can leak when
run(`docker build...`) throws; wrap the build and any operations that may throw
in a try/finally around the run call(s) so cleanup always happens, and in the
finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead of
run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-* directory;
adjust ordering so tmpDir is removed after the build attempt and keep existing
ignoreError semantics removed in favor of the synchronous rmSync call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2fc76141-d389-4c2e-88ba-76308543f6a0
📒 Files selected for processing (3)
- bin/lib/inference-config.js
- bin/lib/onboard.js
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
- bin/lib/inference-config.js
The NemoClaw OpenClaw plugin reads ~/.nemoclaw/config.json to display the configured endpoint and model in its startup banner. Without this file in the sandbox, it falls back to "build.nvidia.com" and the 120B cloud model — misleading on Jetson where Ollama is the actual provider. Write the provider selection config into the sandbox during setupOpenclaw so the plugin banner correctly shows "ollama" + "nemotron-3-nano:4b". Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (2)
bin/lib/onboard.js (2)
685-700: ⚠️ Potential issue | 🟠 Major — Jetson default model selection still conflicts with sandbox provider serialization.

Good call passing `gpu` here, but this now more reliably picks a Jetson-specific Ollama default, which still collides with the model-name heuristic in `buildSandboxConfigSyncScript()` (local Ollama can be written as `nvidia-nim`).

Suggested fix at the root cause (provider derivation):

```diff
 function buildSandboxConfigSyncScript(selectionConfig) {
-  const providerType =
-    selectionConfig.profile === "inference-local"
-      ? selectionConfig.model === DEFAULT_OLLAMA_MODEL
-        ? "ollama-local"
-        : "nvidia-nim"
-      : selectionConfig.endpointType === "vllm"
-        ? "vllm-local"
-        : "nvidia-nim";
+  const localKind = String(
+    selectionConfig.provider || selectionConfig.endpointType || ""
+  ).toLowerCase();
+  const providerType =
+    selectionConfig.profile === "inference-local"
+      ? localKind.includes("ollama")
+        ? "ollama-local"
+        : localKind.includes("vllm")
+          ? "vllm-local"
+          : "nvidia-nim"
+      : "nvidia-nim";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 685 - 700, The chosen Jetson-specific default model (via getDefaultOllamaModel / promptOllamaModel) can still conflict with the provider-detection heuristic in buildSandboxConfigSyncScript; fix this by normalizing model names or deriving provider from the explicit provider variable instead of the model string: update buildSandboxConfigSyncScript to use the provider variable (provider === "ollama-local") or add a normalization helper (e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific names like "nvidia-nim" to the canonical form used by sandbox serialization, and ensure getDefaultOllamaModel and promptOllamaModel return/consult that normalized value so provider serialization remains consistent with the selected model.
279-285: ⚠️ Potential issue | 🟠 Major — Defer port 18789 process cleanup until recreate/create is confirmed.

Line 284 still unconditionally kills the dashboard-forward process before `createSandbox()` decides whether to recreate. If the user keeps an existing sandbox, onboarding can exit with the dashboard no longer forwarded.

Proposed control-flow fix:

```diff
-run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
 sleep(2);

 async function createSandbox(gpu) {
   step(3, 7, "Creating sandbox");
@@
   if (existing) {
@@
     if (recreate.toLowerCase() !== "y") {
       console.log("  Keeping existing sandbox.");
       return sandboxName;
     }
@@
   }
+
+  // Only clean stale dashboard forwards/processes when we are actually creating/recreating.
+  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 279 - 285, The unconditional kill of the dashboard-forward process (the run invocation that executes "kill $(lsof -ti :18789 -c openclaw) ...") should be deferred until after createSandbox() determines a recreate/create is required; change control flow so createSandbox() (or the caller that decides recreate) runs first and only when it indicates a new sandbox will be created/recreated do we execute the run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true }) cleanup (and keep sleep(2) paired with that cleanup if still needed).
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)
812-824: Remove unreachable duplicate config-write block.
`buildSandboxConfigSyncScript()` already writes `~/.nemoclaw/config.json` and exits; the appended `nemoClawConfigScript` after `${script}` never executes.

Cleanup diff:

```diff
-// Also write ~/.nemoclaw/config.json inside the sandbox so the NemoClaw
-// plugin displays the correct endpoint/model in its banner instead of
-// falling back to the cloud defaults.
-const nemoClawConfigScript = `
-mkdir -p ~/.nemoclaw
-cat > ~/.nemoclaw/config.json <<'EOF_NEMOCLAW_CFG'
-${JSON.stringify(sandboxConfig, null, 2)}
-EOF_NEMOCLAW_CFG
-`;
 run(`cat <<'EOF_NEMOCLAW_SYNC' | openshell sandbox connect "${sandboxName}"
 ${script}
-${nemoClawConfigScript}
 exit
 EOF_NEMOCLAW_SYNC`, { stdio: ["ignore", "ignore", "inherit"] });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 812 - 824, The appended nemoClawConfigScript block is redundant because buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json and exits, so remove the duplicate: delete the nemoClawConfigScript constant and stop injecting ${nemoClawConfigScript} into the run(...) payload (the run call that uses openshell sandbox connect "${sandboxName}"), leaving only the previously built ${script} so that the remote script exits as intended; reference buildSandboxConfigSyncScript(), nemoClawConfigScript, and the run(...) invocation to locate the code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@bin/lib/onboard.js`:
- Around line 685-700: The chosen Jetson-specific default model (via
getDefaultOllamaModel / promptOllamaModel) can still conflict with the
provider-detection heuristic in buildSandboxConfigSyncScript; fix this by
normalizing model names or deriving provider from the explicit provider variable
instead of the model string: update buildSandboxConfigSyncScript to use the
provider variable (provider === "ollama-local") or add a normalization helper
(e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific names
like "nvidia-nim" to the canonical form used by sandbox serialization, and
ensure getDefaultOllamaModel and promptOllamaModel return/consult that
normalized value so provider serialization remains consistent with the selected
model.
- Around line 279-285: The unconditional kill of the dashboard-forward process
(the run invocation that executes "kill $(lsof -ti :18789 -c openclaw) ...")
should be deferred until after createSandbox() determines a recreate/create is
required; change control flow so createSandbox() (or the caller that decides
recreate) runs first and only when it indicates a new sandbox will be
created/recreated do we execute the run("kill $(lsof -ti :18789 -c openclaw)
2>/dev/null || true", { ignoreError: true }) cleanup (and keep sleep(2) paired
with that cleanup if still needed).
---
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 812-824: The appended nemoClawConfigScript block is redundant
because buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json
and exits, so remove the duplicate: delete the nemoClawConfigScript constant and
stop injecting ${nemoClawConfigScript} into the run(...) payload (the run call
that uses openshell sandbox connect "${sandboxName}"), leaving only the
previously built ${script} so that the remote script exits as intended;
reference buildSandboxConfigSyncScript(), nemoClawConfigScript, and the run(...)
invocation to locate the code to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f6b98de2-873f-4e85-a20b-901f36fef869
📒 Files selected for processing (1)
bin/lib/onboard.js
The Docker image bakes in a nvidia provider pointing to inference.local which fails DNS inside the sandbox. The setupOpenclaw step already writes the correct 'inference' provider, but the stale 'nvidia' entry remains and can cause OpenClaw to attempt the broken route first. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hey @realkim93 — we independently hit the same issue (#539) and arrived at the same iptables-legacy image-patching approach, so we're confident the core fix here is sound. We did a full review of the diff and everything looks clean.

One thing we noticed (also flagged by CodeRabbit): the port-18789 cleanup runs unconditionally before createSandbox() decides whether to recreate, so a rerun that keeps an existing sandbox can kill a healthy dashboard forward.

Once that's addressed we're happy to approve from our side. Nice work getting this tested on real hardware.
… and cleanup safety

- Defer port-18789 kill to createSandbox() after the recreate decision so no-op reruns don't break a healthy dashboard forward
- Derive provider type from selectionConfig.provider metadata instead of comparing model names to DEFAULT_OLLAMA_MODEL (fixes Jetson misclassification)
- Wrap patchGatewayImageForJetson tmpDir in try/finally with fs.rmSync
- Remove unreachable duplicate nemoClawConfigScript in setupOpenclaw
- Extend Docker restart timeout to 30s for slower Jetson devices

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
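The metadata-driven provider derivation from this commit can be sketched as follows; the `selectionConfig` field names (`provider`, `endpointType`, `profile`) follow the reviewer's suggested diff and should be treated as assumptions about the config object's shape:

```javascript
// Sketch: derive the sandbox provider from explicit metadata rather than
// comparing model names to DEFAULT_OLLAMA_MODEL, which misclassified
// Jetson-specific Ollama defaults. Field names are assumed, not verified.
function deriveProviderType(selectionConfig) {
  const kind = String(
    selectionConfig.provider || selectionConfig.endpointType || ""
  ).toLowerCase();
  if (selectionConfig.profile !== "inference-local") return "nvidia-nim";
  if (kind.includes("ollama")) return "ollama-local";
  if (kind.includes("vllm")) return "vllm-local";
  return "nvidia-nim";
}

const provider = deriveProviderType({
  profile: "inference-local",
  provider: "ollama-local",
  model: "nemotron-3-nano:4b", // model name no longer affects the result
});
// provider === "ollama-local"
```

Because the decision keys off provider metadata, any Ollama model name (including the Jetson default) serializes as `ollama-local`.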
Thanks @ericksoa and @wscurran for the thoughtful review. Really appreciate you taking the time to go through the diff carefully.

@ericksoa Thank you for raising the port-18789 timing issue. I've pushed a fix that defers the dashboard-forward cleanup into createSandbox(), after the recreate decision. Other cleanup in this commit:
Let me know if there's anything else that needs attention!
The previous commit deferred the port-18789 kill to createSandbox(), but left the port availability check in preflight. This caused a hard exit when re-running onboard with an existing dashboard forward still active. Port 18789 is now fully managed inside createSandbox(). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- nemoclaw.js: merge both setup-jetson and debug/uninstall/version commands, use main's color-formatted help with added setup-jetson line - onboard.js: adopt main's immutable openclaw.json approach and conditional stale gateway cleanup via hasStaleGateway() - inference-config.test.js: adopt main's expect/toEqual style with INFERENCE_ROUTE_URL - nim.test.js: keep Jetson-specific GPU test with main's expect style Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test: use getLocalProviderBaseUrl() for ollama-local endpoint URL (host-gateway bypass for OpenShell 0.0.10 DNS issue) - local-inference.test: convert assert → expect (vitest) for jetson tests Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes #404
Summary
Adds end-to-end Jetson Orin Nano support: from GPU detection through to a working OpenClaw agent with local Ollama inference.
- GPU detection: `nvidia-smi` reports `[N/A]` for memory.total on Jetson (unified memory). Added fallback via GPU name + `/proc/device-tree/model`, reading system RAM with `free(1)`.
- iptables fix: the gateway image's nftables backend needs `nft_chain_filter` modules → k3s panic. Auto-rebuilds the gateway image with iptables-legacy (idempotent via Docker label).
- Default model: `nemotron-3-nano:4b` for Jetson (the 30B default exceeds 8GB).
- Local routing: OpenShell 0.0.10 does not register `inference.local` in CoreDNS ([Bug] inference.local returns HTTP 403 inside sandbox when using Ollama local inference on DGX Spark #314, Local Ollama inference routing fails from sandbox #385, Sandbox cannot reach local Ollama instance on host due to enforced internal proxy (403 Forbidden) #417). Local providers (Ollama, vLLM) now route directly via `host.openshell.internal` instead. Added `ollama_local` endpoint to the sandbox baseline policy.
- `setup-jetson` command: host-level prep (NVIDIA runtime, kernel modules).

Verified working
Inference path: OpenClaw (sandbox) → host.openshell.internal:11434 → Ollama → nemotron-3-nano:4b
Test plan
"NVIDIA Jetson Orin Nano ... Super", 7619 MBopenclaw agentresponds via nemotron-3-nano:4bTested on: Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra)
Co-Authored-By: Claude Opus 4.6 (1M context) noreply@anthropic.com
Summary by CodeRabbit
New Features
Improvements
Tests