From b8df0efc9106278ac9375c48734af38b3b43be29 Mon Sep 17 00:00:00 2001
From: mselensky
Date: Tue, 3 Mar 2026 12:07:00 -0700
Subject: [PATCH 1/5] initial pixi docs

---
 .../Environment/Customization/pixi.md | 135 ++++++++++++++++++
 mkdocs.yml                            |   1 +
 2 files changed, 136 insertions(+)
 create mode 100644 docs/Documentation/Environment/Customization/pixi.md

diff --git a/docs/Documentation/Environment/Customization/pixi.md b/docs/Documentation/Environment/Customization/pixi.md
new file mode 100644
index 000000000..272b3b36d
--- /dev/null
+++ b/docs/Documentation/Environment/Customization/pixi.md
@@ -0,0 +1,135 @@
+# What is Pixi?
+
+[Pixi](https://pixi.prefix.dev/latest/) is a package management tool that aims to unify the workflows of existing package managers such as [conda](conda.md) or [pip](https://pip.pypa.io/en/stable/) for a smoother and more robust user experience. Pixi uses the [rattler](https://github.com/prefix-dev/rattler) library, a high-performance implementation of core conda functionality (such as dependency solving) written in [Rust](https://rust-lang.org), making Pixi significantly faster than "traditional" or "pure" conda. Pixi facilitates the management of project-specific environments, which may contain a mix of packages from Python and other languages. Pixi handles both *environment creation* and *package installation*, replacing the need to use conda for the former and pip for the latter.
+
+# Using Pixi on Kestrel
+
+Pixi is available as a module on both the CPU and GPU nodes on Kestrel:
+
+```
+$ ml help pixi
+------------------ Module Specific Help for "pixi/0.65.0" ------------------
+Name : Pixi
+Version: 0.65.0 (built 27 February 2026)
+Source : https://github.com/prefix-dev/pixi
+Docs : https://pixi.prefix.dev
+
+Pixi is a cross-platform, multi-language package manager and workflow tool
+built on the foundation of the conda ecosystem.
+It provides developers with
+an exceptional experience similar to popular package managers like cargo or
+npm, but for any language.
+```
+
+Pixi is mainly designed to create environments for a specific project/working directory. Two minimal examples (one for CPU nodes, and another for GPU nodes) of creating a Pixi environment and running a script from each are provided below. NLR HPC users are encouraged to consult the [Pixi documentation](https://pixi.prefix.dev) for more information on how to get the most from Pixi.
+
+## Minimal environment example on Kestrel - CPU
+
+The following scripts represent a minimal example of using the Pixi module to 1. create a simple Pixi environment containing the `numpy` package (named `numpy-workspace`) and then 2. execute a Python script that element-wise multiplies two matrices 10 times (`numpy-test.py`). **Be sure to run this on a CPU node:**
+
+??? "Example: Using Pixi to create an environment and execute `numpy-test.py`"
+    Ensure that `numpy-test.py` (found in the next drop-down menu) exists one directory above `numpy-workspace` for this example.
+
+    ```bash
+    #!/bin/bash
+    # Load Pixi module
+    ml pixi
+    # Initialize Pixi environment
+    pixi init numpy-workspace
+    # Note that we navigate to the Pixi environment folder to add packages and eventually execute the Python script
+    cd numpy-workspace
+    # Add numpy and Python as dependencies
+    pixi add numpy python=3.11
+    # The Python script we wish to execute is found one directory above 'numpy-workspace'
+    pixi run python ../numpy-test.py
+
+    # Optional - cleanup numpy-workspace and PIXI_CACHE_DIR
+    #echo "Removing numpy-workspace..."
+    #cd .. && rm -rf numpy-workspace
+    #echo "Removing PIXI_CACHE_DIR..."
+    #rm -rf $PIXI_CACHE_DIR
+    ```

+??? "`numpy-test.py`"
+    ```python
+    import numpy as np
+    from time import time
+    import os
+
+    print(f"Running Python script using the Pixi environment '{os.getcwd()}'")
+
+    # create random arrays as input data
+    asize = pow(10, 6)
+    array_a = np.float32(np.random.rand(asize))
+    array_b = np.float32(np.random.rand(asize))
+    array_c = np.float32(np.random.rand(asize))
+
+    matrix_a = ([array_a], [array_b], [array_c])
+    matrix_b = ([array_c], [array_b], [array_a])
+
+    # numpy - CPU
+    nloops = 10
+    t0 = time()
+    for i in range(nloops):
+        np.multiply(matrix_a, matrix_b)
+    cpu_time = time() - t0
+    print("numpy on CPU required", round(cpu_time, 2), "seconds for element-wise multiplying two matrices each of size", 3*asize, "a total number of", nloops, "times.")
+    ```
+
+Note that the Python script intended to be run by this environment (`numpy-test.py`) is executed from the `numpy-workspace` folder via `pixi run python ../numpy-test.py`. After the environment is created and you navigate to the environment folder, prefixing the call to Python with `pixi run ...` will use the version of Python and its associated packages from the `numpy-workspace` environment.
+
+## Minimal environment example on Kestrel - GPU
+
+The following script represents a minimal example of using the Pixi module to 1. create a simple Pixi environment containing a GPU-enabled version of `torch` (named `numpy-workspace`) and then 2. run a simple Python command that verifies whether this environment's `torch` can see a GPU device. **Be sure to run this on a GPU node:**
+
+??? "Example: Using Pixi to create a GPU-enabled PyTorch environment on Kestrel"
+    ```
+    #!/bin/bash
+    # Load Pixi module
+    ml pixi
+    # Initialize Pixi environment
+    pixi init cuda-workspace
+    # Note that we navigate to the Pixi environment folder to add packages and eventually execute the Python script
+    cd cuda-workspace
+
+    # Manually create pixi.toml
+    cat <<EOF > pixi.toml
+    [workspace]
+    channels = ["https://prefix.dev/conda-forge"]
+    name = "pytorch-conda-forge"
+    platforms = ["linux-64",]
+
+    [system-requirements]
+    cuda = "12.4"
+
+    [dependencies]
+    pytorch-gpu = "*"
+    cuda-version = ">=12.4"
+    cowpy = "*"
+    python = "3.11.*"
+    EOF
+    pixi run cowpy "MUUUUUUDA!"
+    pixi run python -c "import torch; print('Can pixi find a GPU? -->', torch.cuda.is_available(), '\n', 'Using CUDA version:', torch.version.cuda)"
+    ```
+
+Note that in this example, we specify `cuda = "12.4"` under `[system-requirements]` in the `pixi.toml`. This will allow Pixi to install a GPU-enabled version of PyTorch; without this, Pixi would install a CPU-only version of `torch`. Additionally, when creating environments from a custom `pixi.toml`, note that anything under `[dependencies]` is functionally equivalent to `pixi add ... ` as written in the [CPU example above](#minimal-environment-example-on-kestrel---cpu). At the time of writing, the GPU drivers on Kestrel are compatible with `cuda/12.4+`, so we pin `cuda-version = ">=12.4"` as a dependency accordingly as an extra insurance that we pull a compatible version of PyTorch.
+
+!!! warning "A note on performant, multi-node PyTorch on Kestrel's GPU nodes"
+    Note that installing PyTorch with the aim for good communication performance across multiple GPU nodes Kestrel requires special considerations that are not covered in this page. See [our dedicated documentation on the topic](../../Machine_Learning/index.md#installing-pytorch-on-kestrel-with-multi-node-and-gpu-support) for more information.
+
+## Package caching location
+
+On Kestrel, the Pixi modules are designed to cache downloaded packages to `/scratch/${USER}/.cache/rattler` by default. This saves users storage space in their `/home` or `/projects` folders, though this may be overridden by users by modifying and exporting the `PIXI_CACHE_DIR` environment variable after loading the module.
+
+To save space in your personal `/scratch`, you may safely run `rm -rf /scratch/${USER}/.cache/rattler` at any time to clear this cache directory.
+
+## Tips and tricks for using Pixi on Kestrel
+
+**Coming soon!**
+
+# Useful links
+
+- [Managing Python Environments with Pixi-From Laptop to HPC](https://nrel-my.sharepoint.com/:v:/g/personal/chschwin_nrel_gov/IQAr0Zstq-bQSZ5nkfcd-eXDAT_GOjhkcPXWHa6S2XRympU) (NLR HPC Tutorial series - requires access to CSC Tutorials Teams channel)
+- [Switching from Conda/Mamba to Pixi](https://pixi.prefix.dev/latest/switching_from/conda/) (external site)
+- [PyTorch installation with Pixi](https://pixi.prefix.dev/latest/python/pytorch/) (external site)
+- [Building custom packages with Pixi](https://pixi.prefix.dev/latest/build/getting_started/) (external site)
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index 88306681a..777a90def 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -71,6 +71,7 @@ nav:
         - Lmod: Documentation/Environment/lmod.md
         # - Customization:
         - Conda: Documentation/Environment/Customization/conda.md
+        - Pixi: Documentation/Environment/Customization/pixi.md
     - Building an Application:
         - Documentation/Environment/Building_Packages/index.md
         - Documentation/Environment/Building_Packages/acquire.md

From f0f48c56dc672279bb9f4ac7ec3c75dc05f10d19 Mon Sep 17 00:00:00 2001
From: Matt Selensky <48727421+mselensky@users.noreply.github.com>
Date: Wed, 4 Mar 2026 09:22:46 -0700
Subject: [PATCH 2/5] Update docs/Documentation/Environment/Customization/pixi.md

Co-authored-by: Haley Yandt <46908710+yandthj@users.noreply.github.com>
---
 docs/Documentation/Environment/Customization/pixi.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Documentation/Environment/Customization/pixi.md b/docs/Documentation/Environment/Customization/pixi.md
index 272b3b36d..a302e2e22 100644
--- a/docs/Documentation/Environment/Customization/pixi.md
+++ b/docs/Documentation/Environment/Customization/pixi.md
@@ -80,7 +80,7 @@ Note that the Python script intended to be run by this environment (`numpy-test.
 
 ## Minimal environment example on Kestrel - GPU
 
-The following script represents a minimal example of using the Pixi module to 1. create a simple Pixi environment containing a GPU-enabled version of `torch` (named `numpy-workspace`) and then 2. run a simple Python command that verifies whether this environment's `torch` can see a GPU device. **Be sure to run this on a GPU node:**
+The following script represents a minimal example of using the Pixi module to 1. create a simple Pixi environment containing a GPU-enabled version of `torch` (named `cuda-workspace`) and then 2. run a simple Python command that verifies whether this environment's `torch` can see a GPU device. **Be sure to run this on a GPU node:**
 
 ??? "Example: Using Pixi to create a GPU-enabled PyTorch environment on Kestrel"
     ```

From 3082f6b9c67a99a62c821cd056a9562d93e2000a Mon Sep 17 00:00:00 2001
From: Matt Selensky <48727421+mselensky@users.noreply.github.com>
Date: Wed, 4 Mar 2026 09:23:08 -0700
Subject: [PATCH 3/5] Update docs/Documentation/Environment/Customization/pixi.md

Co-authored-by: Haley Yandt <46908710+yandthj@users.noreply.github.com>
---
 docs/Documentation/Environment/Customization/pixi.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Documentation/Environment/Customization/pixi.md b/docs/Documentation/Environment/Customization/pixi.md
index a302e2e22..fc85a79ac 100644
--- a/docs/Documentation/Environment/Customization/pixi.md
+++ b/docs/Documentation/Environment/Customization/pixi.md
@@ -115,7 +115,7 @@ The following script represents a minimal example of using the Pixi module to 1.
 Note that in this example, we specify `cuda = "12.4"` under `[system-requirements]` in the `pixi.toml`. This will allow Pixi to install a GPU-enabled version of PyTorch; without this, Pixi would install a CPU-only version of `torch`. Additionally, when creating environments from a custom `pixi.toml`, note that anything under `[dependencies]` is functionally equivalent to `pixi add ... ` as written in the [CPU example above](#minimal-environment-example-on-kestrel---cpu). At the time of writing, the GPU drivers on Kestrel are compatible with `cuda/12.4+`, so we pin `cuda-version = ">=12.4"` as a dependency accordingly as an extra insurance that we pull a compatible version of PyTorch.
 
 !!! warning "A note on performant, multi-node PyTorch on Kestrel's GPU nodes"
-    Note that installing PyTorch with the aim for good communication performance across multiple GPU nodes Kestrel requires special considerations that are not covered in this page. See [our dedicated documentation on the topic](../../Machine_Learning/index.md#installing-pytorch-on-kestrel-with-multi-node-and-gpu-support) for more information.
+    Note that installing PyTorch with the aim for good communication performance across multiple GPU nodes on Kestrel requires special considerations that are not covered in this page. See [our dedicated documentation on the topic](../../Machine_Learning/index.md#installing-pytorch-on-kestrel-with-multi-node-and-gpu-support) for more information.
 
 ## Package caching location

From 0874343091f47d71b84be342b2d50bfc73ceb0d8 Mon Sep 17 00:00:00 2001
From: Matt Selensky <48727421+mselensky@users.noreply.github.com>
Date: Wed, 4 Mar 2026 09:23:49 -0700
Subject: [PATCH 4/5] Update docs/Documentation/Environment/Customization/pixi.md

Co-authored-by: Haley Yandt <46908710+yandthj@users.noreply.github.com>
---
 docs/Documentation/Environment/Customization/pixi.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Documentation/Environment/Customization/pixi.md b/docs/Documentation/Environment/Customization/pixi.md
index fc85a79ac..60497b80a 100644
--- a/docs/Documentation/Environment/Customization/pixi.md
+++ b/docs/Documentation/Environment/Customization/pixi.md
@@ -119,7 +119,7 @@ Note that in this example, we specify `cuda = "12.4"` under `[system-requirement
 
 ## Package caching location
 
-On Kestrel, the Pixi modules are designed to cache downloaded packages to `/scratch/${USER}/.cache/rattler` by default. This saves users storage space in their `/home` or `/projects` folders, though this may be overridden by users by modifying and exporting the `PIXI_CACHE_DIR` environment variable after loading the module.
+On Kestrel, the Pixi modules are designed to cache downloaded packages to `/scratch/${USER}/.cache/rattler` by default. This saves storage space in `/home` or `/projects` folders, though this may be overridden by modifying and exporting the `PIXI_CACHE_DIR` environment variable after loading the module.
 
 To save space in your personal `/scratch`, you may safely run `rm -rf /scratch/${USER}/.cache/rattler` at any time to clear this cache directory.

From 94e0aaf8103c5dd25d749bff6ae6f7debdb2d70b Mon Sep 17 00:00:00 2001
From: Matt Selensky <48727421+mselensky@users.noreply.github.com>
Date: Wed, 4 Mar 2026 09:24:07 -0700
Subject: [PATCH 5/5] Update docs/Documentation/Environment/Customization/pixi.md

Co-authored-by: Haley Yandt <46908710+yandthj@users.noreply.github.com>
---
 docs/Documentation/Environment/Customization/pixi.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Documentation/Environment/Customization/pixi.md b/docs/Documentation/Environment/Customization/pixi.md
index 60497b80a..b6787638b 100644
--- a/docs/Documentation/Environment/Customization/pixi.md
+++ b/docs/Documentation/Environment/Customization/pixi.md
@@ -20,7 +20,7 @@ an exceptional experience similar to popular package managers like cargo or
 npm, but for any language.
 ```
 
-Pixi is mainly designed to create environments for a specific project/working directory. Two minimal examples (one for CPU nodes, and another for GPU nodes) of creating a Pixi environment and running a script from each are provided below. NLR HPC users are encouraged to consult the [Pixi documentation](https://pixi.prefix.dev) for more information on how to get the most from Pixi.
+Pixi is mainly designed to create environments for a specific project/working directory. Two minimal examples (one for CPU nodes, and another for GPU nodes) of creating a Pixi environment and running a script from each are provided below. Please consult the [Pixi documentation](https://pixi.prefix.dev) for more information on how to get the most from Pixi.
 
 ## Minimal environment example on Kestrel - CPU