This repo contains two upstream-derived codebases with different dependency stacks. Keep them in separate environments:
- Codebase: `mmdetection3d/` (MMDet3D). Conda env name: `mm3d`. Runs the PV-RCNN baseline + RefineMoE variants for KITTI in this repo.
- Codebase: `FSHNet/` (OpenPCDet-style). Conda env name: `fshnet`. Runs the Voxel R-CNN baseline for KITTI and the FSHNet baselines (primarily Waymo).
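For orientation, a sketch of the assumed top-level layout (only the two subtrees this guide covers):

```
.
├── mmdetection3d/   # MMDet3D subtree → mm3d env
└── FSHNet/          # OpenPCDet-style subtree → fshnet env
```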
Principles:
- Use conda to manage `python` (and isolate the environment).
- Install PyTorch (+ CUDA wheel) manually for your machine.
- Use `uv` to install Python package requirements.
Prerequisites:
- conda (Miniconda/Mambaforge)
- CUDA toolkit/driver matching your chosen PyTorch wheels (if using GPU; see the check below)
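A quick way to confirm the driver/toolkit side before picking wheels (assumes the NVIDIA tools are on your `PATH`; output formats vary by version):

```bash
# Driver version and the maximum CUDA version the driver supports
nvidia-smi
# Installed CUDA toolkit version (only needed if you compile extensions locally)
nvcc --version
```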
Environment 1: mmdetection3d/ (mm3d)

Create the environment:
```bash
conda create -n mm3d python=3.8
conda activate mm3d
python -m pip install -U pip uv
```

Install PyTorch (pick versions that match your CUDA/driver):
```bash
# Example (update as needed):
python -m pip install \
    torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 \
    -f https://download.pytorch.org/whl/torch_stable.html
```
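Before continuing, it is worth verifying that the wheel you picked actually sees the GPU (a generic PyTorch check, not specific to this repo):

```bash
# Prints torch version, the CUDA version it was built against, and GPU visibility
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```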
Install OpenMMLab stack (as used by this repo):

```bash
python -m pip install -U openmim 'numpy==1.23.0'
mim install mmengine
mim install 'mmcv==2.0.0rc4'
mim install 'mmdet==3.0.0'
```
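A quick version check of the installed stack (all three packages expose `__version__`):

```bash
python -c "import mmengine, mmcv, mmdet; print(mmengine.__version__, mmcv.__version__, mmdet.__version__)"
```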
Install this repo's mmdetection3d/ codebase:

```bash
# Install test deps (includes pytest/yapf/isort/flake8 via requirements)
uv pip install -r mmdetection3d/requirements/tests.txt
# Editable install (use mim so that `mim download/search` can find model-index metadata)
cd mmdetection3d
mim install -e . --no-build-isolation
```

Quick sanity check:
```bash
pytest mmdetection3d/tests/test_models/test_voxel_encoders/test_pillar_encoder.py
```

Notes:
- Optional performance deps (e.g. `cumm`, `spconv`) depend on CUDA; install them only if your experiment/config requires them (see the example below).
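If a config does require them, the install is one line. The `-cu111` tags below are an assumption chosen to pair with the `torch==1.8.0+cu111` example above; pick the wheel tag that matches your own CUDA version:

```bash
# Example only: CUDA-11.1-tagged wheels to match the cu111 torch example
uv pip install cumm-cu111 spconv-cu111
```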
Environment 2: FSHNet/ (fshnet)

Create the environment:
```bash
conda create -n fshnet python=3.8
conda activate fshnet
python -m pip install -U pip uv
```

Install PyTorch (pick versions that match your CUDA/driver):
```bash
# Example (update as needed; repo authors tested torch>=1.10 for this subtree):
python -m pip install \
    torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 \
    -f https://download.pytorch.org/whl/torch_stable.html
```

Install FSHNet/ dependencies and build extensions (tested recipe):
```bash
cd FSHNet
# CUDA extensions (pick the CUDA-tagged wheels that match your system)
uv pip install \
    cumm-cu113 \
    spconv-cu113 \
    scikit-image \
    pyyaml \
    tensorboardX \
    easydict \
    pyquaternion \
    opencv-python \
    protobuf==3.20.3 \
    triton==2.1.0 \
    numba==0.48.0
# Build and install (editable)
python setup.py develop
# Waymo extras (only needed for Waymo experiments)
python -m pip uninstall -y waymo-open-dataset-tf-2-4-0
uv pip install waymo-open-dataset-tf-2-11-0
# SharedArray: install with plain pip (uv install may fail)
python -m pip install SharedArray==3.1.0
# Install torch-scatter (will resolve a compatible build for your torch/cuda)
uv pip install torch-scatter
```

Quick sanity check:
python -c "import pcdet; print('pcdet ok')"Waymo-specific extras:
Waymo-specific extras:
- Waymo and related packages are only needed for Waymo experiments.
- Prefer documenting these as an opt-in block in your experiment notes, since they are platform- and CUDA-sensitive.
Optional dependencies:
- `navsim` is optional. If you need `navsim==2.0.0`, it is not available on PyPI and must be installed from source.
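A generic from-source install pattern; the repository URL and tag below are placeholders, not something this repo pins — substitute the source and ref you actually need:

```bash
# Hypothetical example; replace <navsim-repo-url> and the tag with the real ones
git clone <navsim-repo-url> navsim
cd navsim
git checkout v2.0.0   # assumed tag name for navsim==2.0.0
python -m pip install -e .
```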
Note:
- `FSHNet/` does not ship a solver-friendly `requirements.txt` in this repo. Use the recipe above (it matches the original working environment).
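If you want a reproducible record anyway, you can snapshot the resolved environment once setup succeeds; the file name here is just a suggestion:

```bash
# Snapshot the exact versions this env resolved to (suggested file name)
python -m pip freeze > FSHNet/requirements.lock.txt
```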