
Use CPU/GPU extras with uv sources for torch installation#604

Open
RajdeepKushwaha5 wants to merge 6 commits into mllam:main from RajdeepKushwaha5:refactor/uv-torch-extras

Conversation

@RajdeepKushwaha5
Contributor

Describe your changes

Move torch from direct [project] dependencies to [project.optional-dependencies] with mutually exclusive cpu and gpu extras. Add [tool.uv.sources], [[tool.uv.index]], and [tool.uv] conflicts so that uv sync --extra cpu (or --extra gpu) installs torch from the correct PyTorch index in a single command.

Simplify the uv CI path from manual uv venv + uv pip install torch + uv pip install -e . to uv sync --extra ${{ matrix.device }} --group dev. The pip CI path retains the dry-run hack since pip cannot read [tool.uv.sources]. README install instructions updated accordingly.
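A minimal sketch of what this wiring could look like in `pyproject.toml` (the index names and URLs are assumptions for illustration; the gpu URL assumes the CUDA 12.8 wheel index):

```toml
[project.optional-dependencies]
# Mutually exclusive variant selectors; torch itself stays a hard requirement.
cpu = ["torch>=2.3.0"]
gpu = ["torch>=2.3.0"]

[tool.uv]
# Forbid installing both extras at once.
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]

[tool.uv.sources]
# Route torch to the index matching the selected extra.
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-gpu", extra = "gpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-gpu"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

With this in place, `uv sync --extra cpu` or `uv sync --extra gpu` resolves torch from the corresponding PyTorch index in a single command.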

Issue Link

closes #600

Type of change

  • 🐛 Bug fix (non-breaking change that fixes an issue)
  • ✨ New feature (non-breaking change that adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📖 Documentation (Addition or improvements to documentation)

Checklist before requesting a review

  • My branch is up-to-date with the target branch - if not update your fork with the changes from the target branch (use pull with --rebase option if possible).
  • I have performed a self-review of my code
  • For any new/modified functions/classes I have added docstrings that clearly describe their purpose, expected inputs, and returned values
  • I have placed in-line comments to clarify the intent of any hard-to-understand passages of my code
  • I have updated the README to cover introduced code changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have given the PR a name that clearly describes the change, written in imperative form (context).
  • I have requested a reviewer and an assignee (assignee is responsible for merging). This applies only if you have write access to the repo, otherwise feel free to tag a maintainer to add a reviewer and assignee.

Checklist for reviewers

Each PR comes with its own improvements and flaws. The reviewer should check the following:

  • the code is readable
  • the code is well tested
  • the code is documented (including return types and parameters)
  • the code is easy to maintain

Author checklist after completed review

  • I have added a line to the CHANGELOG describing this change, in a section
    reflecting type of change (add section where missing):
    • maintenance: added entry under Maintenance in CHANGELOG.md

Copilot AI review requested due to automatic review settings April 15, 2026 03:01
Contributor

Copilot AI left a comment


Pull request overview

This PR restructures how PyTorch (torch) is installed by introducing mutually exclusive cpu/gpu extras and using uv source routing to automatically select the correct PyTorch index, simplifying both CI and contributor installation flows.

Changes:

  • Move torch from base dependencies to cpu/gpu optional extras and add uv index/source routing plus extra conflicts.
  • Simplify the uv CI installation path to uv sync --extra ${{ matrix.device }} --group dev.
  • Update README installation guidance and add a Maintenance entry to the changelog.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

File Description
pyproject.toml Adds cpu/gpu extras for torch and configures uv indexes/sources with conflict rules.
README.md Updates installation instructions to use uv sync --extra cpu/gpu and clarifies pip’s manual torch install.
CHANGELOG.md Documents the maintenance change for torch installation approach.
.github/workflows/install-and-test.yml Updates CI to use uv sync for uv installs and adjusts the pip dry-run logic.
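The simplified uv installation path in the workflow could look roughly like this (a sketch only; the step and matrix variable names are assumptions, not the actual file contents):

```yaml
# Sketch: assumes a matrix with `installer` (uv/pip) and `device` (cpu/gpu) axes
- name: Install dependencies (uv)
  if: ${{ matrix.installer == 'uv' }}
  run: uv sync --extra ${{ matrix.device }} --group dev
```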


Collaborator

@sadamov sadamov left a comment


This looks good overall. The extras + [tool.uv.sources] approach is the right solution for uv sync. A few things to fix:

This PR has a big side-effect: the install instructions change the CUDA version from 11.1 to 12.8. This should be made very clear to everyone and also discussed at a future dev meeting.

@mllam mllam deleted a comment from Copilot AI Apr 21, 2026
@mllam mllam deleted a comment from Copilot AI Apr 21, 2026
@mllam mllam deleted a comment from Copilot AI Apr 21, 2026
@mllam mllam deleted a comment from Copilot AI Apr 21, 2026
Comment thread pyproject.toml
Comment on lines 31 to 32
"plotly>=5.15.0",
"torch>=2.3.0",
"torch-geometric==2.3.1",
Collaborator


torch must stay in core dependencies (CRITICAL)

Removing torch from [project.dependencies] means pip install neural-lam (or uv sync without an extra) produces a runtime ImportError. The extras should control the variant, not make torch optional.

Suggested change
"plotly>=5.15.0",
"torch>=2.3.0",
"torch-geometric==2.3.1",
"plotly>=5.15.0",
"torch>=2.3.0",
"torch-geometric==2.3.1",

Collaborator


For some reason I can't get this inline suggestion to display properly, sorry.

Comment thread pyproject.toml
Comment on lines +50 to +51
[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]
Collaborator


Worth noting here that --torch-backend is not available for uv sync, so readers understand why the extras approach is used rather than a simpler flag.

Suggested change
[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]
[tool.uv]
# uv-specific configuration for PyTorch CPU/GPU variant selection.
# NOTE: --torch-backend is only supported in `uv pip`, not `uv sync`.
# Tracking issue for project-interface support: https://github.com/astral-sh/uv/issues/12994
# Until then, we use extras + explicit index routing for `uv sync`.
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]

Comment thread .github/workflows/install-and-test.yml Outdated
Comment thread .github/workflows/install-and-test.yml Outdated
Comment thread README.md Outdated
Comment thread README.md Outdated
@sadamov
Collaborator

sadamov commented Apr 21, 2026

Nit: name : has a spurious space (line 46 of the updated workflow file). Since this line is between diff hunks it cannot be an inline suggestion, but it should be fixed:

      - name: Install uv (if applicable)

Keep torch as a required core dependency, and add cpu/gpu optional
extras alongside [tool.uv.sources], [[tool.uv.index]], and [tool.uv]
conflicts so that uv routes each extra to the correct PyTorch index
URL and the extras stay mutually exclusive.

Simplify the uv CI path from manual venv+pip install to a single
uv sync --extra <device> --group dev command. The pip CI path keeps
the dry-run hack since pip cannot read [tool.uv.sources]. Update
README install instructions accordingly.

Closes mllam#600
@RajdeepKushwaha5 force-pushed the refactor/uv-torch-extras branch from 1daef58 to fb8f4eb on April 21, 2026 11:16
@RajdeepKushwaha5
Contributor Author

Pushed fb8f4eb addressing all review comments. Summary of changes:

pyproject.toml

  • Restored torch>=2.3.0 to [project.dependencies] so pip install neural-lam (and uv sync without an extra) keeps torch as a hard requirement. The cpu/gpu extras now only select the variant.
  • Added the --torch-backend / uv issue #12994 note above [tool.uv].

.github/workflows/install-and-test.yml

  • Rewrote the header comment per your suggestion (torch framed as required, extras select variant, --torch-backend note, pip routing explanation, hosts block preserved).
  • Reverted the pip dry-run argument from ".[${{ matrix.device }}]" back to "." — pip ignores [tool.uv.sources] and torch is in core deps now, so the extra is unnecessary.
  • Fixed the name : whitespace on the Install uv step.

README.md

  • Replaced the **regarding torch installation** paragraph with your shorter wording that makes the hard requirement explicit.
  • Expanded the uv install steps with the post-install uv pip install torch --torch-backend auto recipe for users on a CUDA version other than 12.8.
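The post-install recipe described above, written out as a command sequence (a sketch: the exact README wording may differ, and the `--torch-backend` flag is assumed to take `auto` as its value):

```sh
# Base install: the cpu extra keeps the initial download small
uv sync --extra cpu

# Then swap in the torch build matching the locally installed CUDA driver
uv pip install torch --torch-backend auto
```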

Commit message

  • Updated to reflect that torch stays in core deps (the previous wording said "Move torch from direct dependencies to optional-dependencies" which is no longer accurate).

@sadamov sadamov self-requested a review April 25, 2026 18:00
Collaborator

@sadamov sadamov left a comment


Thanks for the round of revisions in fb8f4eb, all my previous comments are addressed. Three things left, and I'd like all of them in this PR rather than as follow-ups so we land the lockfile + cache-key story coherently with #606:

  1. CHANGELOG: link the PR, split into two entries to surface the CUDA bump.
  2. Commit uv.lock and switch CI to --locked. This is the big one.
  3. README: lead the CUDA-swap recipe with --extra cpu (smaller download) and clarify the swap is reverted by the next uv sync.

Details inline.
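A sketch of the lockfile workflow requested in point 2 (the exact CI invocation is an assumption):

```sh
# Regenerate uv.lock from pyproject.toml and commit it
uv lock
git add uv.lock

# In CI, --locked fails fast if uv.lock is out of date
# instead of silently re-resolving dependencies
uv sync --locked --extra cpu --group dev
```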

Comment thread CHANGELOG.md Outdated
Comment thread .github/workflows/install-and-test.yml Outdated
Comment thread README.md Outdated
@sadamov sadamov self-requested a review April 28, 2026 16:06
