
Noether Framework

Docs: noether-docs.emmi.ai · License: ENLP


Noether is Emmi AI’s open software framework for Engineering AI. Built on transformer building blocks, it delivers the full engineering stack, allowing teams to build, train, and operate industrial simulation models across engineering verticals, eliminating the need for component re-engineering or an in-house deep learning team.

Key Features

  • Modular Transformer Architecture: Composed of transformer building blocks optimized for physical systems.
  • Hardware Agnostic: Seamless execution across CPU, MPS (Apple Silicon), and NVIDIA GPUs.
  • Industrial Grade: Designed for high-fidelity industrial simulations and engineering verticals.
  • Ready for Scale: Built-in support for Multi-GPU and SLURM cluster environments.
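
Hardware-agnostic execution typically follows the standard PyTorch device-selection pattern. A minimal sketch, assuming PyTorch is installed — `pick_device` is a hypothetical helper for illustration, not part of Noether's API:

```python
import torch

def pick_device() -> torch.device:
    """Pick the fastest available backend: CUDA, then MPS, then CPU."""
    # Hypothetical helper -- Noether resolves the accelerator for you
    # via configuration (see the +accelerator flag in the Quickstart).
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
```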


Installation

You can use the framework either from source or from the pre-built packages.

Prerequisites

  • Install uv as the package manager on your system.
  • Clone the repo into your desired folder: git clone https://github.com/Emmi-AI/noether.git
  • Follow the next steps 🚀

Working with pre-built packages

A pre-built package is available on PyPI as emmiai-noether. Because torch-cluster is compiled against your installed PyTorch version, install a compatible PyTorch first, then install the package without build isolation:

uv pip install torch==2.8.0
uv pip install emmiai-noether==1.0.0 --no-build-isolation torch-cluster

Working with the source code

If you prefer to work with the source code directly, without installing a prebuilt package, follow the steps below.

Important

If you are running on NVIDIA GPUs or need custom CUDA paths, you must configure your environment variables first. Please follow our Advanced Linux Setup Guide before running the command below.

Create a fresh virtual environment and synchronize the core dependencies:

uv venv && source .venv/bin/activate
uv sync

Note: Initial installation may take several minutes as third-party dependencies are compiled. Duration depends on your hardware and network speed.

Validate your installation by running the tests (module import errors indicate an incomplete installation):

pytest -q tests/

If the tests pass (logged warnings are fine), you're all set and ready to go!

How to clean up and do a fresh installation

If your venv is no longer configured as intended, reset it as follows:

  • Deactivate existing environment in your terminal by running: deactivate
  • Remove existing .venv (optionally add uv.lock): rm -rf .venv uv.lock
  • [Optional] Clean uv cache: uv cache clean
  • Create a new venv and activate it: uv venv && source .venv/bin/activate
  • [Optional] If deleted, generate a new uv.lock file: uv lock
  • [Optional] If contributor: pre-commit install

Quickstart

You can run a training job immediately using the tutorial configuration. For local development (Mac/CPU), use:

uv run noether-train --hp tutorial/configs/train_shapenet.yaml \
    +experiment/shapenet=upt \
    dataset_root=./data \
    +accelerator=mps

Learn more about different hardware support here.


Performance Benchmarks

The following benchmarks demonstrate Noether's performance across different hardware, using the ShapeNet-Car dataset and the AB-UPT model.

Note

All benchmarks were conducted using FP32 precision to establish a baseline for raw computational performance.

Hardware                 | Config | Precision | Time | Speedup
MacBook Pro M3 Max       | 1x MPS | FP32      | 135m | 1.0x
RTX Pro 4500 (Blackwell) | 1x GPU | FP32      | 26m  | 5.2x
RTX Pro 4500 (Blackwell) | 2x GPU | FP32      | 8m   | 16.8x
NVIDIA H100              | 1x GPU | FP32      | 5.7m | 23.6x
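
The Speedup column is each run's wall-clock time relative to the MPS baseline. A quick sanity check (times copied from the table; since the listed minutes are themselves rounded, recomputed speedups can differ in the last digit):

```python
# Wall-clock minutes from the benchmark table; the MPS run is the 1.0x baseline.
times_min = {
    "MacBook Pro M3 Max (1x MPS)": 135.0,
    "RTX Pro 4500 (1x GPU)": 26.0,
    "RTX Pro 4500 (2x GPU)": 8.0,
    "NVIDIA H100 (1x GPU)": 5.7,
}

baseline = times_min["MacBook Pro M3 Max (1x MPS)"]
speedups = {hw: round(baseline / t, 1) for hw, t in times_min.items()}
```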

Contributing

Guidelines

We follow these standards:

  • Write typed Python code.
  • Write documentation for new features and modules:
    • For larger modules, also update the hand-written (non-autogenerated) documentation under docs/.
    • For smaller features, clear API documentation is sufficient and required.
  • Before committing your changes:
    • Run the tests via pytest -q tests/.
    • Ensure that pre-commit hooks are not disabled and run at every commit. We use ruff as a linter and formatter, and mypy for type checking; their configuration is defined in the project's root pyproject.toml.
  • All changes destined for the main branch must go through a pull request (PR).
    • For a PR to be merged, at least one core maintainer must approve it.
    • All tests must be green.
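
For orientation, linter and type-checker settings in a root pyproject.toml typically look something like the following — a hypothetical sketch, not the repository's actual configuration; check the project's pyproject.toml for the authoritative values:

```toml
[tool.ruff]
line-length = 120

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting

[tool.mypy]
strict = true
```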

Pre-commit Hooks

To install pre-commit execute:

pre-commit install

To run the pre-commit configuration on all files, you can use:

pre-commit run --all-files

To run the pre-commit configuration on specific files use:

pre-commit run --files /your/file/path1.py /your/file/path2.py

Third-party contributors

For bugs, use the corresponding template to create an issue.

For feature requests, either submit a PR with a clear description of the proposed feature (it must follow the guidelines above), or file a feature request as an issue and we will consider adding it to our backlog.

Configuring IDEs

PyCharm

  • Mark src/ directory as Sources Root (right mouse button click on the folder -> Mark Directory as)
  • Settings -> Editor -> Code Style -> Python -> Tabs and Indents -> change Continuation indent from 8 to 4.
  • Settings -> Editor -> Code Style -> Python -> Spaces -> Around Operators -> Power operator (**)

Working with GitHub

With the available GitHub Actions we automate several development workflows, ranging from building the docs to building our modules as wheel files.

To test a workflow locally, we recommend using act.

Note

Make sure to install Docker Desktop as requested by the official documentation.

Install it on a Mac with: brew install act

For example, to check the package release pipeline:

act workflow_dispatch --input version_type=patch -W .github/workflows/release.yml

or to see if tests are runnable:

act pull_request -W .github/workflows/run-tests.yml

Supported systems

Note that we develop on macOS and Linux; Windows is not officially supported at this time, so you will have to find workarounds for any Windows-specific issues yourself.


Licensing

Note

TL;DR: Research & development ✅| Production deployment ❌ (without commercial license)

The Noether Framework is licensed under a Non-Production License (based on Mistral AI's MNPL). This means you're free to use, modify, and research with the framework, but commercial/production use requires a separate commercial license from Emmi AI.

We're committed to open AI innovation while sustainably growing our business. For commercial licensing, contact us at partner@emmi.ai.

Read the full license here.


Endorsed by research groups from

JKU Linz ETH Zurich UPenn University of Washington TUM Munich Sorbonne University

Citing

If you use Noether in your research or industrial applications, please cite this repository. A formal BibTeX entry for our forthcoming ArXiv publication will be provided here shortly.

@misc{noether2026,
  author = {Bleeker, Maurits and Hennerbichler, Markus and Kuksa, Pavel},
  title = {Noether: A PyTorch-based Framework for Engineering AI},
  year = {2026},
  publisher = {GitHub},
  note = {Equal contribution},
  url = {https://github.com/Emmi-AI/noether}
}
