Kerem Kılıç1, Vikram Iyer1, Roham Zendehdel Nobari1, Yagiz Devre2
1ETH Zürich, 2Princeton
SpotMap is a complete end-to-end pipeline for robotic scene understanding and interactive mapping. It integrates on-board SLAM, volumetric 3D reconstruction, instance segmentation, dynamic scene graph generation, and affordance prediction — giving mobile robots the capability to build detailed maps and understand their surroundings with minimal human supervision.
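For orientation, the sketch below shows roughly how these stages feed into one another. It is purely conceptual: every function name and path in it is a hypothetical placeholder rather than SpotMap's actual API; the real entry points are documented per module under `src/` (see below).

```python
# Conceptual dataflow only. All function names and paths are hypothetical
# placeholders, not SpotMap's API; each stage corresponds to a module in src/.

def acquire_frames(bag_path):
    """Extract RGB-D frames and poses from a ROS bag (src/data_acquisition)."""
    return {"rgb": [], "depth": [], "poses": []}

def reconstruct(frames):
    """Run SLAM and TSDF fusion, export a mesh/point cloud (src/mapping_and_reconstruction)."""
    return {"mesh": None, "point_cloud": None}

def segment_instances(reconstruction):
    """3D instance segmentation, e.g. with OpenMask3D (src/segmentation_and_scene_graph)."""
    return []

def build_scene_graph(instances):
    """Assemble instances, relations, and affordances into a scene graph."""
    return {"nodes": instances, "edges": []}

if __name__ == "__main__":
    frames = acquire_frames("data/spot_capture.bag")  # hypothetical input path
    recon = reconstruct(frames)
    instances = segment_instances(recon)
    graph = build_scene_graph(instances)
    print(f"{len(graph['nodes'])} objects in the scene graph")
```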
```
SpotMap/
├── src/            # Core source code: modules of the pipeline
├── configs/        # Configuration files for SLAM and OpenMask3D
├── data/           # Original and intermediate datasets
├── scene_graph/    # Current scene graph representation
└── docs/           # Documentation, figures, and additional resources
```
```
src/
├── data_acquisition/
├── mapping_and_reconstruction/
├── segmentation_and_scene_graph/
└── re_scan/
```
Each of the core modules in `src/` has its own README with installation, configuration, and usage instructions:
- **Data Acquisition**
  See `src/data_acquisition/README.md` for how to extract and preprocess raw RGB-D frames and ROS bag files.
- **Mapping & Reconstruction**
  See `src/mapping_and_reconstruction/README.md` for how to configure and run SLAM, TSDF fusion, denoising, and mesh/point-cloud export (an illustrative TSDF fusion sketch follows this list).
- **Segmentation & Scene Graph**
  See `src/segmentation_and_scene_graph/README.md` for how to perform semantic segmentation, build the scene graph, and visualize the results.
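For orientation, here is a minimal, illustrative TSDF fusion sketch built on Open3D. It is not SpotMap's implementation: the voxel size, intrinsics, depth scale, and pose convention below are assumptions, and the actual configuration lives in `configs/` and the module's README.

```python
# Illustrative TSDF fusion with Open3D; NOT the SpotMap implementation.
# Assumes RGB/depth frames plus 4x4 camera-to-world poses are already
# extracted (the data_acquisition module produces the real inputs).
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,  # 1 cm voxels (assumed value)
    sdf_trunc=0.04,     # truncation distance (assumed value)
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
)
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault  # placeholder intrinsics
)

def integrate_frame(color_path, depth_path, pose):
    """Fuse one RGB-D frame into the TSDF volume."""
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=3.0,
        convert_rgb_to_intensity=False,
    )
    # Open3D expects the extrinsic as world-to-camera, i.e. the inverted pose.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

# After integrating all frames:
# mesh = volume.extract_triangle_mesh()
# o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)
```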
A sample dataset captured by Spot is already available at `src/data_acquisition/ros2/rgbd_dataset`. This dataset can be used to test and validate the pipeline components.
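As a quick sanity check before running the full pipeline, the sketch below simply iterates over the sample RGB-D frames. The `rgb/` and `depth/` subfolders and the `.png` extension are assumptions about the dataset layout; consult the data acquisition README for the actual format.

```python
# Quick look at the bundled sample dataset. The rgb/ and depth/ subfolders and
# PNG filenames are assumed, not a documented contract.
from pathlib import Path
import cv2

root = Path("src/data_acquisition/ros2/rgbd_dataset")
rgb_paths = sorted((root / "rgb").glob("*.png"))      # assumed layout
depth_paths = sorted((root / "depth").glob("*.png"))  # assumed layout

for rgb_path, depth_path in zip(rgb_paths, depth_paths):
    rgb = cv2.imread(str(rgb_path), cv2.IMREAD_COLOR)
    depth = cv2.imread(str(depth_path), cv2.IMREAD_UNCHANGED)  # keep 16-bit depth
    print(rgb_path.name, rgb.shape, depth.dtype, depth.max())
```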
If you find this work useful, please cite our paper:
WORK IN PROGRESS

If you find SpotMap helpful, please ⭐ star our GitHub repository to support the project!
For questions or issues, please open a GitHub Issue.
This project is licensed under the MIT License — see LICENSE for details.
