Yiwei Xu, Yifei Yu, Wentian Gan, Tengfei Wang, Zongqian Zhan, Hao Cheng and Xin Wang
```shell
# SSH
git clone git@github.com:xywjohn/GS_On-The-Fly.git --recursive

# HTTPS
git clone https://github.com/xywjohn/GS_On-The-Fly --recursive
```

After cloning, please manually decompress the diff-gaussian-rasterization.zip archive located under the submodules directory.
```shell
conda env create --file environment.yml
conda activate On_The_Fly_GS
cd .../GS_On-The-Fly
pip install pytorch3d
pip install open3d
```

Since this project aims to enable simultaneous image acquisition and 3D Gaussian Splatting (3DGS) training, it builds on the On-The-Fly SfM system proposed by Zhan et al., which already achieves near real-time image acquisition and camera pose estimation. In this project, the camera poses and sparse point clouds provided by that system serve as the input for subsequent 3DGS training.
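The `sparse/0` folders produced by the pipeline (see the directory layout below) use COLMAP-style binary files. If you want to inspect the camera intrinsics the SfM system hands to 3DGS, a minimal reader can be written with the standard library alone. This is an illustrative sketch assuming the standard COLMAP `cameras.bin` layout; `read_cameras_bin` is a hypothetical helper, not part of this repository:

```python
import struct

# COLMAP camera model id -> number of intrinsic parameters
# (0=SIMPLE_PINHOLE, 1=PINHOLE, 2=SIMPLE_RADIAL, 3=RADIAL,
#  4=OPENCV, 5=OPENCV_FISHEYE, 6=FULL_OPENCV).
CAMERA_MODEL_NUM_PARAMS = {0: 3, 1: 4, 2: 4, 3: 5, 4: 8, 5: 8, 6: 12}

def read_cameras_bin(path):
    """Parse a COLMAP-style cameras.bin into
    {camera_id: (model_id, width, height, params)}."""
    cameras = {}
    with open(path, "rb") as f:
        # File starts with the number of cameras as a little-endian uint64.
        num_cameras = struct.unpack("<Q", f.read(8))[0]
        for _ in range(num_cameras):
            camera_id, model_id = struct.unpack("<ii", f.read(8))
            width, height = struct.unpack("<QQ", f.read(16))
            num_params = CAMERA_MODEL_NUM_PARAMS[model_id]
            params = struct.unpack("<" + "d" * num_params,
                                   f.read(8 * num_params))
            cameras[camera_id] = (model_id, width, height, params)
    return cameras
```

For full pose and point data (`images.bin`, `points3D.bin`), the analogous readers from the official COLMAP scripts can be used instead.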
You can use your own data or the test data we provide (demo_data/data1 and On-The-Fly/data1) for processing with the On-The-Fly SfM system. You can download our data here and place it into the folder with the same name in this project directory. Next, configure the parameters DatasetName, SourceImageDataset, and On_The_Fly_Dataset in DatasetPrepare.py to the corresponding dataset names or paths. For example, if you intend to use the dataset we provide directly, set them as follows:
```python
DatasetName = ['data1']
SourceImageDataset = r".../demo_data"
On_The_Fly_Dataset = r".../On-The-Fly"  # where you save the On-The-Fly SfM results
```

Then run DatasetPrepare.py:

```shell
python DatasetPrepare.py
```

This will produce results as illustrated below:
```
.../On-The-Fly/data1
=>16
||===>images
||===||===>10002.jpg
||===||===>10005.jpg
||===||===>......
||===>sparse
||===||===>0
||=========||===>cameras.bin
||=========||===>imageMatchMatrix.txt
||=========||===>images.bin
||=========||===>imagesNames.txt
||=========||===>points3D.bin
=>17
||===>images
||===||===>10002.jpg
||===||===>10005.jpg
||===||===>......
||===>sparse
||===||===>0
||=========||===>cameras.bin
||=========||===>imageMatchMatrix.txt
||=========||===>images.bin
||=========||===>imagesNames.txt
||=========||===>points3D.bin
=>18
|| ......
=>19
|| ......
=>20
|| ......
=>21
|| ......
......
```
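Each numbered folder is one incremental reconstruction, named by the number of images registered at that point (16, 17, 18, ...). If you want to sanity-check the output of DatasetPrepare.py before training, a small helper along these lines could enumerate the chunks in acquisition order. `list_sfm_chunks` is a hypothetical name for illustration, not part of the repository:

```python
from pathlib import Path

def list_sfm_chunks(root):
    """Return the incremental SfM chunk folders under `root` (named by
    registered-image count) in ascending order, keeping only those that
    already contain the expected images/ and sparse/0 layout."""
    chunks = []
    for entry in Path(root).iterdir():
        if not (entry.is_dir() and entry.name.isdigit()):
            continue  # skip anything that is not a numbered chunk folder
        if (entry / "images").is_dir() and (entry / "sparse" / "0").is_dir():
            chunks.append(entry)
    return sorted(chunks, key=lambda p: int(p.name))
```

Iterating over the returned list gives the chunks in the same order the training script consumes newly acquired images.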
Source_Path_Dir should specify the directory that contains all image data, camera pose information, and the sparse point cloud. Model_Path_Dir should specify the output directory for the 3DGS results. The output frequency of 3DGS can be configured through the command-line arguments below.
```shell
python ContinuosProgressiveTrain.py --Source_Path_Dir .../On-The-Fly/data1 --Model_Path_Dir .../OutputDir --FinalOptimizationIterations 4000
```

Command line arguments for ContinuosProgressiveTrain.py:
- `--Source_Path_Dir`: Path to the source directory containing all image data, camera pose information, and the sparse point cloud.
- `--Model_Path_Dir`: Path where the trained model should be stored.
- Training iterations for the initial training phase.
- Training iterations for the progressive training phase, i.e. the number of 3DGS training iterations performed each time a new image is acquired.
- How often Gaussian densification is performed during the progressive training phase, measured in 3DGS training iterations.
- `--FinalOptimizationIterations`: Training iterations for the final refinement phase.
- Whether to save the Gaussians before a new image is acquired.
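For scripted or repeated runs, the launch command above can be assembled programmatically. The sketch below uses only the three flags shown in the example invocation; any further flag names should be checked against ContinuosProgressiveTrain.py itself, and `build_train_command` is a hypothetical convenience wrapper:

```python
import subprocess

def build_train_command(source_dir, model_dir, final_iters=4000):
    """Assemble the ContinuosProgressiveTrain.py invocation from the three
    documented flags; returns an argv list suitable for subprocess.run."""
    return [
        "python", "ContinuosProgressiveTrain.py",
        "--Source_Path_Dir", str(source_dir),
        "--Model_Path_Dir", str(model_dir),
        "--FinalOptimizationIterations", str(final_iters),
    ]

# Example (uncomment to actually launch training):
# subprocess.run(build_train_command(".../On-The-Fly/data1", ".../OutputDir"),
#                check=True)
```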
If you find this work useful, please cite:

```bibtex
@misc{xu2025gaussianontheflysplattingprogressive,
    title={Gaussian On-the-Fly Splatting: A Progressive Framework for Robust Near Real-Time 3DGS Optimization},
    author={Yiwei Xu and Yifei Yu and Wentian Gan and Tengfei Wang and Zongqian Zhan and Hao Cheng and Xin Wang},
    year={2025},
    eprint={2503.13086},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2503.13086},
}
```
