Use CPU/GPU extras with uv sources for torch installation #604
RajdeepKushwaha5 wants to merge 6 commits into mllam:main
Conversation
Pull request overview
This PR restructures how PyTorch (torch) is installed by introducing mutually exclusive cpu/gpu extras and using uv source routing to automatically select the correct PyTorch index, simplifying both CI and contributor installation flows.
Changes:
- Move `torch` from base dependencies to `cpu`/`gpu` optional extras and add `uv` index/source routing plus extra conflicts.
- Simplify the `uv` CI installation path to `uv sync --extra ${{ matrix.device }} --group dev`.
- Update README installation guidance and add a Maintenance entry to the changelog.
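The extras-plus-source-routing arrangement summarized above can be sketched roughly as follows. This is an illustrative reconstruction following uv's documented PyTorch integration pattern, not the exact contents of the PR; index names, URLs, and version pins are assumptions:

```toml
# Sketch: cpu/gpu extras with uv index routing (illustrative values).
[project.optional-dependencies]
cpu = ["torch>=2.3.0"]
gpu = ["torch>=2.3.0"]

[tool.uv]
# The extras are mutually exclusive; uv refuses to resolve both at once.
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]

[tool.uv.sources]
# Route each extra to its matching PyTorch index.
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-gpu", extra = "gpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-gpu"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

With this in place, `uv sync --extra cpu` and `uv sync --extra gpu` each pull torch from the corresponding index in one command, with no `--index-url` juggling by the user.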
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `pyproject.toml` | Adds `cpu`/`gpu` extras for torch and configures uv indexes/sources with conflict rules. |
| `README.md` | Updates installation instructions to use `uv sync --extra cpu/gpu` and clarifies pip's manual torch install. |
| `CHANGELOG.md` | Documents the maintenance change for the torch installation approach. |
| `.github/workflows/install-and-test.yml` | Updates CI to use `uv sync` for uv installs and adjusts the pip dry-run logic. |
Force-pushed from 726337d to 8a23749.
Force-pushed from 18a0d3f to 1daef58.
This looks good overall. The extras + [tool.uv.sources] approach is the right solution for uv sync. A few things to fix:
This PR has a big side-effect: the install instructions now change the CUDA version from 11.1 to 12.8. This should be made very clear to everyone and also discussed at a future dev-meeting.
| "plotly>=5.15.0", | ||
| "torch>=2.3.0", | ||
| "torch-geometric==2.3.1", |
`torch` must stay in core dependencies (CRITICAL)
Removing `torch` from `[project.dependencies]` means `pip install neural-lam` (or `uv sync` without an extra) produces a runtime `ImportError`. The extras should control the variant, not make torch optional.
| "plotly>=5.15.0", | |
| "torch>=2.3.0", | |
| "torch-geometric==2.3.1", | |
| "plotly>=5.15.0", | |
| "torch>=2.3.0", | |
| "torch-geometric==2.3.1", |
For some reason I can't get this inline suggestion to display properly, sorry.
```toml
[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]
```
Worth noting here that `--torch-backend` is not available for `uv sync`, so readers understand why the extras approach is used rather than a simpler flag.
Suggested change:

```toml
[tool.uv]
# uv-specific configuration for PyTorch CPU/GPU variant selection.
# NOTE: --torch-backend is only supported in `uv pip`, not `uv sync`.
# Tracking issue for project-interface support: https://github.com/astral-sh/uv/issues/12994
# Until then, we use extras + explicit index routing for `uv sync`.
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]
```
Nit: `- name: Install uv (if applicable)`
Keep torch as a required core dependency, and add cpu/gpu optional extras alongside [tool.uv.sources], [[tool.uv.index]], and [tool.uv] conflicts so that uv routes each extra to the correct PyTorch index URL and the extras stay mutually exclusive. Simplify the uv CI path from manual venv+pip install to a single uv sync --extra <device> --group dev command. The pip CI path keeps the dry-run hack since pip cannot read [tool.uv.sources]. Update README install instructions accordingly. Closes mllam#600
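The simplified CI path described in the commit message would look something like the excerpt below. This is a hedged sketch of the workflow shape, assuming a `device` matrix over `cpu`/`gpu` and the standard checkout/setup-uv actions; it is not the actual contents of `install-and-test.yml`:

```yaml
# Illustrative excerpt: one uv sync step replaces the manual venv + pip installs.
jobs:
  test:
    strategy:
      matrix:
        device: [cpu, gpu]
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Install package and dev dependencies
        run: uv sync --extra ${{ matrix.device }} --group dev
```

Because `uv sync` resolves the extra against `[tool.uv.sources]`, the matrix value alone selects the correct PyTorch index; the pip leg of the matrix cannot do this, which is why it keeps the dry-run workaround.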
Force-pushed from 1daef58 to fb8f4eb.
Thanks for the round of revisions in fb8f4eb, all my previous comments are addressed. Three things left, and I'd like all of them in this PR rather than as follow-ups so we land the lockfile + cache-key story coherently with #606:

- CHANGELOG: link the PR, split into two entries to surface the CUDA bump.
- Commit `uv.lock` and switch CI to `--locked`. This is the big one.
- README: lead the CUDA-swap recipe with `--extra cpu` (smaller download) and clarify the swap is reverted by the next `uv sync`.

Details inline.
…add --locked to CI, fix README CUDA swap recipe
Describe your changes
Move `torch` from direct `[project]` dependencies to `[project.optional-dependencies]` with mutually exclusive `cpu` and `gpu` extras. Add `[tool.uv.sources]`, `[[tool.uv.index]]`, and `[tool.uv]` conflicts so that `uv sync --extra cpu` (or `--extra gpu`) installs torch from the correct PyTorch index in a single command.

Simplify the uv CI path from manual `uv venv` + `uv pip install torch` + `uv pip install -e .` to `uv sync --extra ${{ matrix.device }} --group dev`. The pip CI path retains the dry-run hack since pip cannot read `[tool.uv.sources]`. README install instructions updated accordingly.

Issue Link
closes #600
Type of change
Checklist before requesting a review
`pull` with `--rebase` option if possible).
Each PR comes with its own improvements and flaws. The reviewer should check the following:
Author checklist after completed review
reflecting type of change (add section where missing):