feat(sandbox): switch device plugin to CDI injection mode (#503)
Conversation
pimlock reviewed on Mar 23, 2026
```diff
-            serde_json::json!(GPU_RUNTIME_CLASS_NAME),
-        );
-    } else if !template.runtime_class_name.is_empty() {
+    if !template.runtime_class_name.is_empty() {
```
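The quoted diff drops the GPU-specific branch, so only an explicitly configured runtime class is ever honoured. A minimal sketch of the resulting logic (the `SandboxTemplate` struct and `runtime_class_for` helper here are illustrative stand-ins, not OpenShell's actual types):

```rust
// Illustrative sketch only: with CDI injection, requesting GPUs no longer
// forces a runtime class onto the pod spec; only an explicit
// runtime_class_name on the template is used.
struct SandboxTemplate {
    runtime_class_name: String,
    gpu_count: u32,
}

/// Returns the runtime class to set on the pod spec, if any.
fn runtime_class_for(template: &SandboxTemplate) -> Option<&str> {
    if !template.runtime_class_name.is_empty() {
        Some(template.runtime_class_name.as_str())
    } else {
        // GPU pods rely on the CDI device request alone.
        None
    }
}

fn main() {
    let gpu = SandboxTemplate { runtime_class_name: String::new(), gpu_count: 1 };
    assert_eq!(gpu.gpu_count, 1);
    assert_eq!(runtime_class_for(&gpu), None);

    let custom = SandboxTemplate { runtime_class_name: "kata".into(), gpu_count: 0 };
    assert_eq!(runtime_class_for(&custom), Some("kata"));
}
```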
Collaborator
I think this validation needs to be updated as well, since the runtime class is no longer going to be set:
OpenShell/crates/openshell-server/src/sandbox/mod.rs
Lines 130 to 147 in ef196db
Member
Author
That's a good point. The tests pass in this case because the nvidia runtime class still exists on a GPU-enabled k3s cluster, but this is not a requirement for running GPU-enabled sandboxes.
(Note that we currently still rely on the runtimeClass to deploy the NVIDIA Device Plugin with GPU access.)
Member
Author
I have removed the runtime class check in validate_gpu_support.
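With the runtime class check removed, GPU validation only needs to confirm that the cluster actually advertises `nvidia.com/gpu` capacity. A hedged sketch of what a simplified `validate_gpu_support` might look like (the signature and error type are assumptions, not the actual OpenShell code):

```rust
// Hypothetical simplification: validation checks advertised GPU capacity
// only; no RuntimeClass lookup, since CDI injection via the device plugin
// is sufficient for GPU access.
fn validate_gpu_support(node_gpu_capacity: u64) -> Result<(), String> {
    if node_gpu_capacity == 0 {
        return Err(
            "no node advertises nvidia.com/gpu; is the NVIDIA device plugin running?".into(),
        );
    }
    Ok(())
}

fn main() {
    assert!(validate_gpu_support(1).is_ok());
    assert!(validate_gpu_support(0).is_err());
}
```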
Configure the NVIDIA device plugin to use deviceListStrategy=cdi-cri so that GPU devices are injected via direct CDI device requests in the CRI. Sandbox pods now only require the nvidia.com/gpu resource request — runtimeClassName is no longer set on GPU pods. Signed-off-by: Evan Lezar <elezar@nvidia.com>
Using index-based device IDs improves compatibility across platforms, including Jetson/Tegra-based and WSL2-based systems. Signed-off-by: Evan Lezar <elezar@nvidia.com>
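To illustrate the commit above: CDI device names built from GPU indices (e.g. `nvidia.com/gpu=0`) remain valid on platforms that lack per-GPU UUIDs, such as Jetson/Tegra and WSL2. The helper below is purely illustrative, not the device plugin's actual implementation:

```rust
// Illustrative only: construct a fully-qualified CDI device name from a
// GPU index. Index-based IDs work even where UUIDs are unavailable.
fn cdi_device_name(index: usize) -> String {
    format!("nvidia.com/gpu={index}")
}

fn main() {
    assert_eq!(cdi_device_name(0), "nvidia.com/gpu=0");
    assert_eq!(cdi_device_name(1), "nvidia.com/gpu=1");
}
```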
For newer NVIDIA Container Toolkit versions, the components installed through the nvidia-container-toolkit, libnvidia-container-tools, and libnvidia-container1 packages are considered legacy. In CDI mode -- or when native CDI is used -- only the nvidia-container-toolkit-base package is required, with the notable components being:

* nvidia-ctk - The general-purpose NVIDIA Container Toolkit CLI. It includes functionality such as nvidia-ctk cdi generate to generate CDI specifications and nvidia-ctk cdi list to show available CDI devices.
* nvidia-cdi-hook - Implements specific container lifecycle hooks used to ensure that a container is set up correctly to allow GPU access after device nodes and driver files are injected using CDI. This CLI is aliased by the `nvidia-ctk hook` subcommand.
* nvidia-container-runtime - A wrapper for runc that adds GPU support in environments where direct CDI device requests are not possible. This includes k3s, where the nvidia RuntimeClass is added automatically if the nvidia-container-runtime is detected, and is used to ensure the injection of device nodes and libraries for the k8s-device-plugin containers.

This change also renames the Docker build stage to nvidia-container-toolkit explicitly for clarity. Signed-off-by: Evan Lezar <elezar@nvidia.com>
Summary
Configure the NVIDIA device plugin to use `deviceListStrategy: cdi-cri` so GPU devices are injected via direct CDI device requests in the CRI. Sandbox pods now only need `nvidia.com/gpu: 1` in their resource limits — `runtimeClassName` is no longer set on GPU pods.

Related Issue
Related to #398
Changes
- `deploy/docker/Dockerfile.images`: remove unneeded NVIDIA Container Toolkit components from the gateway image.
- `deploy/kube/gpu-manifests/nvidia-device-plugin-helmchart.yaml`: add `deviceListStrategy: cdi-cri`, `cdi.nvidiaHookPath`, and `nvidiaDriverRoot: "/"` to the Helm values.
- `crates/openshell-server/src/sandbox/mod.rs`: remove the `runtimeClassName` insertion for GPU pods in both `sandbox_template_to_k8s()` and `inject_pod_template()`; add a unit test asserting the CDI path sets no `runtimeClassName`.
- `architecture/gateway-single-node.md`: update the GPU Enablement section to document CDI injection mode.
- `agents/skills/debug-openshell-cluster/SKILL.md`: add Step 8 with CDI-specific diagnostics (`nvidia-ctk cdi list`, device plugin logs, CDI spec files).

Testing
`mise run pre-commit` passes

Checklist