21 changes: 13 additions & 8 deletions docs/about/architecture.md
@@ -19,15 +19,18 @@ content:

# How OpenShell Works

OpenShell runs as a [k3s](https://k3s.io/) Kubernetes cluster inside a Docker container. Each sandbox is an isolated Kubernetes pod managed through the gateway. Four components work together to keep agents secure.
OpenShell runs as a [K3s](https://k3s.io/) Kubernetes cluster inside a Docker container. Each sandbox is an isolated Kubernetes pod managed through the gateway. Four components work together to keep agents secure.

```{image} architecture.svg
:alt: OpenShell architecture diagram showing the OpenShell component layout
```{figure} architecture.svg
:alt: OpenShell architecture diagram showing the component layout
:align: center
:target: ../_images/architecture.svg
```

## Components

The following table describes each component and its role in the system:

| Component | Role |
|---|---|
| **Gateway** | Control-plane API that coordinates sandbox lifecycle and state, acts as the auth boundary, and brokers requests across the platform. |
@@ -39,7 +42,7 @@ OpenShell runs as a [k3s](https://k3s.io/) Kubernetes cluster inside a Docker co

Every outbound connection from agent code passes through the same decision path:

1. The agent process opens an outbound connection (API call, package install, git clone, etc.).
1. The agent process opens an outbound connection (API call, package install, git clone, and so on).
2. The proxy inside the sandbox intercepts the connection and identifies which binary opened it.
3. The proxy queries the policy engine with the destination, port, and calling binary.
4. The policy engine returns one of three decisions:
@@ -54,13 +54,15 @@ For REST endpoints with TLS termination enabled, the proxy also decrypts TLS and
OpenShell can run locally or on a remote host. The architecture is identical in both cases — only the Docker container location changes.

- **Local**: the K3s cluster runs inside Docker on your workstation. The CLI provisions it automatically on first use.
- **Remote**: the cluster runs on a remote host. Deploy with `openshell gateway start --remote user@host`. For example, connect to your DGX Spark
- **Remote**: the cluster runs on a remote host. Deploy with `openshell gateway start --remote user@host`. For example, connect to your DGX Spark.
```console
$ openshell gateway start --remote <username>@<spark-SSID>.local
$ openshell gateway start --remote <username>@<spark-ssid>.local
$ openshell status
```

## Next Steps

- [Quickstart](get-started.md): Create your first sandbox.
- [Sandboxes](../sandboxes/index.md): Learn how OpenShell enforces isolation across all protection layers.
Continue with one of the following:

- To create your first sandbox, refer to the [Quickstart](../get-started/quickstart.md).
- To learn how OpenShell enforces isolation across all protection layers, refer to [Sandboxes](../sandboxes/index.md).
22 changes: 14 additions & 8 deletions docs/about/overview.md
@@ -38,7 +38,7 @@ The table below summarizes common failure modes and how OpenShell mitigates them

## Protection Layers at a Glance

OpenShell applies defense in depth across four policy domains:
OpenShell applies defense in depth across the following policy domains.

| Layer | What it protects | When it applies |
|---|---|---|
@@ -51,15 +51,21 @@ For details, refer to [Built-in Default Policy](../sandboxes/index.md#built-in-d

## Common Use Cases

- **Secure coding agents**: Run Claude Code, OpenCode, or OpenClaw with constrained file and network access.
- **Private enterprise development**: Route inference to self-hosted or private backends while keeping sensitive context under your control.
- **Compliance and audit**: Treat policy YAML as version-controlled security controls that can be reviewed and audited.
- **Reusable environments**: Use community sandbox images or bring your own containerized runtime.
OpenShell supports a range of agent deployment patterns.

| Use Case | Description |
|-----------------------------|----------------------------------------------------------------------------------------------------------|
| Secure coding agents | Run Claude Code, OpenCode, or OpenClaw with constrained file and network access. |
| Private enterprise development | Route inference to self-hosted or private backends while keeping sensitive context under your control. |
| Compliance and audit | Treat policy YAML as version-controlled security controls that can be reviewed and audited. |
| Reusable environments | Use community sandbox images or bring your own containerized runtime. |

---

## Next Steps

- [Architecture Overview](architecture.md): Understand the components that make up the OpenShell runtime.
- [Quickstart](get-started.md): Install the CLI and create your first sandbox.
- [Sandboxes](../sandboxes/index.md): Learn how OpenShell enforces isolation across all protection layers.
Explore these topics to go deeper:

- To understand the components that make up the OpenShell runtime, refer to the [Architecture Overview](architecture.md).
- To install the CLI and create your first sandbox, refer to the [Quickstart](../get-started/quickstart.md).
- To learn how OpenShell enforces isolation across all protection layers, refer to [Sandboxes](../sandboxes/index.md).
10 changes: 9 additions & 1 deletion docs/about/release-notes.md
@@ -5,9 +5,17 @@

# Release Notes

This page covers the highlights of each OpenShell release.
Track the latest changes and improvements to NVIDIA OpenShell.
This page covers the highlights of each release.
For more details, refer to the [OpenShell GitHub Releases](https://github.com/NVIDIA/OpenShell/releases).

## 0.1.0

This is the first release of NVIDIA OpenShell.

### Highlights

- Introduces sandboxed AI agent execution with kernel-level isolation, policy enforcement, and credential management.
- Introduces the `openshell` CLI for creating, managing, and customizing sandboxes.
- Introduces the `openshell-gateway` service, which runs the gateway and manages sandboxes.
- Introduces the `openshell-sandbox` service for running the sandboxed agent.
42 changes: 28 additions & 14 deletions docs/about/get-started.md → docs/get-started/quickstart.md
@@ -27,10 +27,12 @@ Before you begin, make sure you have:

- Python 3.12 or later
- [uv](https://docs.astral.sh/uv/) installed
- Docker Desktop running on your machine <!-- TODO: add compatible version -->
- Docker Desktop running on your machine

## Install the OpenShell CLI

Install the `openshell` package into a virtual environment.

Activate your virtual environment:

```bash
@@ -45,7 +47,7 @@ uv pip install openshell

## Connect to a Remote Gateway (Optional)

If you're running locally, skip this step — the CLI creates a gateway automatically when you create your first sandbox.
If you're running locally, skip this step. The OpenShell CLI creates a gateway automatically when you create your first sandbox.

:::::{tab-set}

@@ -55,7 +57,7 @@ If you're running locally, skip this step — the CLI creates a gateway automati
Deploy an OpenShell gateway on Brev by hitting **Deploy** on the [OpenShell Launchable](https://brev.nvidia.com/launchable/deploy/now?launchableID=env-3AaK9NmCzWp3pVyUDNNFBt805FT).
:::

Once the instance is running, find the gateway URL in the Brev console under **Using Secure Links**. Copy the shareable URL for **port 8080** — this is the gateway endpoint.
After the instance is running, find the gateway URL in the Brev console under **Using Secure Links**. Copy the shareable URL for **port 8080**; this is the gateway endpoint.

```console
$ openshell gateway add https://<your-port-8080-url>.brevlab.com
@@ -67,7 +69,7 @@ $ openshell status
::::{tab-item} DGX Spark

:::{note}
Set up your Spark with NVIDIA Sync first, or make sure SSH access is configured (e.g., SSH keys added to the host).
Set up your Spark with NVIDIA Sync first, or make sure SSH access is configured (such as SSH keys added to the host).
:::

Deploy to a DGX Spark machine over SSH:
@@ -77,7 +79,7 @@
$ openshell status
```

Once `openshell status` shows the gateway as healthy, all subsequent commands route through the SSH tunnel.
After `openshell status` shows the gateway as healthy, all subsequent commands route through the SSH tunnel.

::::

@@ -90,6 +92,9 @@ Choose the tab that matches your agent:
::::{tab-set}

:::{tab-item} Claude Code

Run the following command to create a sandbox with Claude Code:

```console
$ openshell sandbox create -- claude
```
@@ -98,35 +103,44 @@ The CLI prompts you to create a provider from local credentials — type `yes` t
:::

:::{tab-item} OpenCode

Run the following command to create a sandbox with OpenCode:

```console
$ openshell sandbox create -- opencode
```

The CLI prompts you to create a provider from local credentials — type `yes` to continue. If `OPENAI_API_KEY` or `OPENROUTER_API_KEY` is set in your environment, it is picked up automatically. If not, you can configure it from inside the sandbox after it launches.
The CLI prompts you to create a provider from local credentials. Type `yes` to continue. If `OPENAI_API_KEY` or `OPENROUTER_API_KEY` is set in your environment, it is picked up automatically. If not, you can configure it from inside the sandbox after it launches.
:::

:::{tab-item} Codex

Run the following command to create a sandbox with Codex:

```console
$ openshell sandbox create -- codex
```

The CLI prompts you to create a provider from local credentials — type `yes` to continue. If `OPENAI_API_KEY` is set in your environment, it is picked up automatically. If not, you can configure it from inside the sandbox after it launches.
The CLI prompts you to create a provider from local credentials. Type `yes` to continue. If `OPENAI_API_KEY` is set in your environment, it is picked up automatically. If not, you can configure it from inside the sandbox after it launches.
:::

:::{tab-item} Community Sandbox
:::{tab-item} OpenClaw

Run the following command to create a sandbox with OpenClaw:

```console
$ openshell sandbox create --from openclaw
```

The `--from` flag pulls a pre-built sandbox definition from the [OpenShell Community](https://github.com/NVIDIA/OpenShell-Community) catalog. Each definition bundles a container image, a tailored policy, and optional skills into a single package.
:::

::::
:::{tab-item} Community Sandbox

## Next Steps
You can use the `--from` flag to pull other OpenShell sandbox images from the [NVIDIA Container Registry](https://registry.nvidia.com/). For example, to pull the `base` image, run the following command:

You now have a working sandbox! From here, you can:
```console
$ openshell sandbox create --from base
```

- **Follow a guided tutorial** — set up scoped GitHub repo access in {doc}`/tutorials/github-sandbox`.
- **Learn how sandboxes work** — see {doc}`/sandboxes/create-and-manage` for the full lifecycle.
- **Write your own policies** — see {doc}`/sandboxes/policies` for custom access rules.
:::
28 changes: 15 additions & 13 deletions docs/index.md
@@ -28,7 +28,7 @@ content:
[![License](https://img.shields.io/badge/License-Apache_2.0-blue)](https://github.com/NVIDIA/OpenShell/blob/main/LICENSE)
[![PyPI](https://img.shields.io/badge/PyPI-openshell-orange?logo=pypi)](https://pypi.org/project/openshell/)

OpenShell is the safe, private runtime for autonomous AI agents. It provides sandboxed execution environments
NVIDIA OpenShell is the safe, private runtime for autonomous AI agents. It provides sandboxed execution environments
that protect your data, credentials, and infrastructure. Agents run with exactly the permissions they need and
nothing more, governed by declarative policies that prevent unauthorized file access, data exfiltration, and
uncontrolled network activity.
@@ -70,13 +70,15 @@ Install the CLI and create your first sandbox in two commands.
grid-area: 1 / 1;
white-space: nowrap;
opacity: 0;
animation: nc-cycle 6s ease-in-out infinite;
animation: nc-cycle 12s ease-in-out infinite;
}
.nc-swap > span:nth-child(2) { animation-delay: 3s; }
.nc-swap > span:nth-child(3) { animation-delay: 6s; }
.nc-swap > span:nth-child(4) { animation-delay: 9s; }
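  /* One 12s loop shared by four phrases: each span is staggered by 3s and,
     per nc-cycle below, fades in at 5% (~0.6s) and is gone by 25% (~3s) of
     the cycle, so exactly one phrase is visible at a time. */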
@keyframes nc-cycle {
0%, 5% { opacity: 0; }
10%, 42% { opacity: 1; }
50%, 100% { opacity: 0; }
0%, 3% { opacity: 0; }
5%, 20% { opacity: 1; }
25%, 100% { opacity: 0; }
}
.nc-hl { color: #76b900; font-weight: 600; }
.nc-cursor {
@@ -98,12 +100,12 @@ Install the CLI and create your first sandbox in two commands.
</div>
<div class="nc-term-body">
<div><span class="nc-ps">$ </span>uv pip install openshell</div>
<div><span class="nc-ps">$ </span>openshell sandbox create <span class="nc-swap"><span>-- <span class="nc-hl">claude</span></span><span>--from <span class="nc-hl">openclaw</span></span></span><span class="nc-cursor"></span></div>
<div><span class="nc-ps">$ </span>openshell sandbox create <span class="nc-swap"><span>-- <span class="nc-hl">claude</span></span><span>--from <span class="nc-hl">openclaw</span></span><span>-- <span class="nc-hl">opencode</span></span><span>-- <span class="nc-hl">codex</span></span></span><span class="nc-cursor"></span></div>
</div>
</div>
```

Refer to the [Quickstart](about/get-started.md) for more details.
Refer to the [Quickstart](get-started/quickstart.md) for more details.

---

@@ -123,7 +125,7 @@ Learn about OpenShell and its capabilities.
:::

:::{grid-item-card} Quickstart
:link: about/get-started
:link: get-started/quickstart
:link-type: doc

Install the CLI and create your first sandbox in two commands.
@@ -132,8 +134,8 @@ Install the CLI and create your first sandbox in two commands.
{bdg-secondary}`Tutorial`
:::

:::{grid-item-card} Tutorials
:link: tutorials/index
:::{grid-item-card} Set Up a Sandbox with GitHub Repo Access
:link: tutorials/github-sandbox
:link-type: doc

Set up scoped GitHub repository access for an agent, end to end.
@@ -193,8 +195,8 @@ Release Notes <about/release-notes>
:caption: Get Started
:hidden:

Quickstart <about/get-started>
tutorials/index
Quickstart <get-started/quickstart>
GitHub Sandbox <tutorials/github-sandbox>
```

```{toctree}
@@ -223,6 +225,7 @@ inference/configure
reference/cli
reference/default-policy
reference/policy-schema
reference/support-matrix
```

```{toctree}
@@ -231,4 +234,3 @@ reference/policy-schema

resources/eula
```

16 changes: 10 additions & 6 deletions docs/inference/configure.md
@@ -5,7 +5,7 @@

# Configure Inference Routing

This page covers the managed local inference endpoint (`https://inference.local`). External inference endpoints go through sandbox `network_policies` — see [Network Access Rules](/sandboxes/index.md#network-access-rules) for details.
This page covers the managed local inference endpoint (`https://inference.local`). External inference endpoints go through sandbox `network_policies` — refer to [Network Access Rules](/sandboxes/index.md#network-access-rules) for details.

The configuration consists of two values:

@@ -68,6 +68,8 @@ $ openshell inference set \

## Step 3: Verify the Active Config

Confirm that the provider and model are set correctly:

```console
$ openshell inference get
provider: nvidia-prod
@@ -91,7 +93,7 @@ $ openshell inference update --provider openai-prod

## Use It from a Sandbox

Once inference is configured, code inside any sandbox can call `https://inference.local` directly:
After inference is configured, code inside any sandbox can call `https://inference.local` directly:

```python
from openai import OpenAI
@@ -130,7 +132,9 @@ A successful response confirms the privacy router can reach the configured backe

## Next Steps

- **How does inference routing work?** See {doc}`index` for the interception flow and supported API patterns.
- **Need to control external endpoints?** See [Network Access Rules](/sandboxes/index.md#network-access-rules).
- **Managing provider records?** See {doc}`../sandboxes/providers`.
- **CLI reference?** See {doc}`../reference/cli` for `openshell inference` commands.
Explore related topics:

- To understand the inference routing flow and supported API patterns, refer to {doc}`index`.
- To control external endpoints, refer to [Network Access Rules](/sandboxes/index.md#network-access-rules).
- To manage provider records, refer to {doc}`../sandboxes/providers`.
- To view `openshell inference` commands, refer to {doc}`../reference/cli`.
8 changes: 5 additions & 3 deletions docs/inference/index.md
@@ -9,7 +9,7 @@ OpenShell handles inference in two ways:

| Path | How It Works |
|---|---|
| **External endpoints** | Traffic to hosts like `api.openai.com` or `api.anthropic.com` is treated like any other outbound request — allowed or denied by `network_policies`. See [Network Access Rules](/sandboxes/index.md#network-access-rules). |
| **External endpoints** | Traffic to hosts like `api.openai.com` or `api.anthropic.com` is treated like any other outbound request — allowed or denied by `network_policies`. Refer to [Network Access Rules](/sandboxes/index.md#network-access-rules). |
| **`inference.local`** | A special endpoint exposed inside every sandbox for inference that should stay local to the host for privacy and security. The {doc}`privacy router </about/architecture>` strips the original credentials, injects the configured backend credentials, and forwards to the managed model endpoint. |
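
To make the second path concrete, here is a minimal sketch of a call from inside a sandbox. It is illustrative rather than canonical: the `/v1` base path, the placeholder API key, and the model name are assumptions, and the real provider and model come from {doc}`configure`.

```python
from openai import OpenAI

# Point the client at the managed local endpoint. The privacy router strips
# whatever credentials the client sends and injects the configured backend
# credentials before forwarding, so the key below is a throwaway placeholder.
client = OpenAI(base_url="https://inference.local/v1", api_key="placeholder")

response = client.chat.completions.create(
    model="example-model",  # hypothetical; use the model set via `openshell inference set`
    messages=[{"role": "user", "content": "Summarize the open pull requests."}],
)
print(response.choices[0].message.content)
```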

## How `inference.local` Works
@@ -57,5 +57,7 @@ Requests to `inference.local` that do not match the configured provider's suppor

## Next Steps

- **Ready to configure?** Set up the backend behind `inference.local` in {doc}`configure`.
- **Need to control external endpoints?** See [Network Access Rules](/sandboxes/index.md#network-access-rules).
Continue with one of the following:

- To set up the backend behind `inference.local`, refer to {doc}`configure`.
- To control external endpoints, refer to [Network Access Rules](/sandboxes/index.md#network-access-rules).