fix: update ribodetector GPU container to modern PyTorch/CUDA #11197
Draft
pinin4fjords wants to merge 1 commit into master from
Conversation
Force-pushed from 3897f66 to 81405b5
- Update GPU container from PyTorch 1.11.0 (CUDA 11.1) to PyTorch 2.10.0 (CUDA 12.9).
- Pin `cuda-version>=12,<13` in `environment.gpu.yml`.
- Add `containers` section to `meta.yml` with CUDA 12.x (default) and CUDA 11.8 alternatives, following the existing platform key convention.
- Built via Wave v2 template (seqeralabs/wave#1027).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
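For reference, the `cuda-version` pin described in the commit would sit in `environment.gpu.yml` roughly like this (channels and package list are illustrative, not the module's actual file):

```yaml
# environment.gpu.yml (sketch; exact package list is illustrative)
channels:
  - conda-forge
  - bioconda
dependencies:
  - bioconda::ribodetector    # version pin omitted in this sketch
  - pytorch-gpu               # modern PyTorch; needs __cuda at solve time
  - cuda-version>=12,<13      # keep the solver within CUDA 12.x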
Force-pushed from 81405b5 to 7a865c4
This was referenced Apr 22, 2026
Summary
- Pin `cuda-version>=12,<13` in `environment.gpu.yml` to keep the solver within supported CUDA versions
- Add a `containers` section to `meta.yml` with all platform variants (amd64, arm64, CUDA 12, CUDA 11.8)

Context
The old GPU container used PyTorch 1.11.0 because it was the last version whose conda dependencies didn't require the `__cuda` virtual package, which is absent on Wave's GPU-less build servers. Newer pytorch-gpu versions all fail to solve without `CONDA_OVERRIDE_CUDA`. Wave now handles this automatically via a two-pass solve: if the first attempt fails because `__cuda` is missing, it retries with `CONDA_OVERRIDE_CUDA` set (seqeralabs/wave#1027).

With Wave's fix, we can now build containers with current PyTorch. The CUDA 12.x container is the default (it covers any NVIDIA driver supporting CUDA 12.0+). A CUDA 11.8 alternative is recorded in `meta.yml` for users with older drivers.

RFC: GPU container variants in `meta.yml`

GPU containers are tied to a CUDA major version (forward compatibility is full within each major version, so only a couple of variants matter). This PR proposes extending the existing
`containers` block (see fastqc, multiqc) with CUDA-versioned platform keys, so that pipeline developers have documented pre-built URIs when they need to offer users a choice:
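For illustration, the proposed `containers` block could take roughly this shape (the platform key names follow the `+cuda12`/`+cuda11` suffix convention; the URIs are placeholders, not real built images):

```yaml
# meta.yml (sketch; URIs are placeholders, not real built images)
containers:
  "linux/amd64": "<default CPU container URI>"
  "linux/arm64": "<arm64 container URI>"
  "linux/amd64+cuda12": "<CUDA 12.x GPU container URI>"  # default GPU variant
  "linux/amd64+cuda11": "<CUDA 11.8 GPU container URI>"  # for older drivers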
The `+cuda12`/`+cuda11` suffix convention is new. Open to feedback on the naming.

Related

- `__cuda` issue
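The two-pass solve that Wave now performs (described under Context) can be sketched as follows; `conda_solve` is a hypothetical stand-in for the real solver invocation, not Wave's actual code:

```shell
# Sketch of Wave's two-pass solve (seqeralabs/wave#1027).
# `conda_solve` stands in for the real solver call; the failure condition
# mimics a solve that needs the __cuda virtual package, which is absent
# on Wave's GPU-less build servers.
conda_solve() {
  [ -n "$CONDA_OVERRIDE_CUDA" ] || return 1
  echo "solved with CUDA $CONDA_OVERRIDE_CUDA"
}

# Pass 1: plain solve. Pass 2: retry with an advertised CUDA version.
if ! conda_solve; then
  CONDA_OVERRIDE_CUDA=12.9 conda_solve
fi
```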