# RTGS: Enabling Real-Time Gaussian Splatting on Mobile Devices Using Efficiency-Guided Pruning and Foveated Rendering

Official implementation. [Paper]
## Clone the repo

```shell
git clone https://github.com/horizon-research/FoV-3DGS.git
```
## Prepare Dataset
## Prepare a Dense 3DGS Model for Pruning

The trained dense model folder should have the following layout:

```
|-- cameras.json
|-- cfg_args
|-- chkpnt30000.pth
|-- input.ply
`-- point_cloud
    `-- iteration_30000
        `-- point_cloud.ply
```
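As a quick sanity check before pruning, a short helper (illustrative only, not part of the repo) can verify that the folder matches the layout above:

```python
import os

# Files expected in a dense 3DGS training output (per the tree above).
EXPECTED = [
    "cameras.json",
    "cfg_args",
    "chkpnt30000.pth",
    "input.ply",
    os.path.join("point_cloud", "iteration_30000", "point_cloud.ply"),
]

def missing_files(model_dir):
    """Return the expected files that are missing from model_dir."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(model_dir, p))]
```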
## Prepare the Environment

```shell
# Pull the Docker image (this one is for x86 machines; on Jetson you will need a
# different prebuilt image: see https://github.com/dusty-nv/jetson-containers/tree/master
# and pick one that suits your JetPack version).
docker pull pytorch/pytorch:2.3.0-cuda11.8-cudnn8-devel

# Run the container
bash ./run_docker.sh

# Enter the container and install all submodules and dependencies
bash update_submodules.sh
pip install plyfile opencv-python matplotlib icecream
apt-get update
apt-get install libgl1-mesa-glx libglib2.0-0 -y
```
## Ours: Generate + Measure the Quality & FPS

```shell
# Only the "bicycle" scene is enabled; uncomment the other scenes in the script for a batch test.
python3 combined_training_script.py
```

The results will be stored in the scene folder.
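The batch pattern can be sketched as below. This is a hypothetical illustration of how a scene list with commented-out entries typically works, not the repo's actual code; the scene names besides `bicycle` and the `build_commands` helper are assumptions.

```python
# Only "bicycle" is enabled; uncomment the other scenes to run the full batch.
# (Scene names other than "bicycle" are illustrative placeholders.)
scenes = [
    "bicycle",
    # "garden",
    # "counter",
    # "room",
]

def build_commands(scene_list):
    """Build one training command per enabled scene (hypothetical helper)."""
    return [f"python3 combined_training_script.py {s}" for s in scene_list]
```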
```shell
python3 quality_eval.py
```

The results will be in `./full_eval_results/ours-Q`.
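Quality evaluation for rendered views is typically reported with metrics such as PSNR. As a self-contained sketch (not the repo's evaluation code), PSNR between a rendered and a ground-truth image can be computed as:

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```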
```shell
# Measure FPS
bash batch_ours_fps.sh
```
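FPS measurement generally amounts to timing repeated render calls after a warm-up. A minimal standalone sketch (the repo's scripts presumably do something similar, but this helper is illustrative):

```python
import time

def measure_fps(render_fn, n_warmup=10, n_frames=100):
    """Average frames per second of render_fn over n_frames calls,
    after n_warmup untimed warm-up calls."""
    for _ in range(n_warmup):
        render_fn()
    start = time.perf_counter()
    for _ in range(n_frames):
        render_fn()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```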
## SM (Shared Model) FR: Generate + Measure the Layer-Wise Quality & FPS
```shell
# Only the "bicycle" scene is enabled; uncomment the other scenes for a batch test.
bash batch_gen_naive_FR.sh            # generate SMFR
python3 quality_eval_layers_naive.py  # measure quality in each layer; results in ./layers_eval_results/naiveFR
bash batch_naive_fps.sh               # measure FPS; results in ./fps
```
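Foveated rendering assigns each pixel a quality layer based on its eccentricity from the gaze point: the fovea gets full quality and outer rings get progressively less. Conceptually (an illustrative sketch with made-up radii, not the repo's implementation):

```python
import numpy as np

def layer_map(h, w, gaze=(0.5, 0.5), radii=(0.15, 0.35)):
    """Assign each pixel a foveation layer: 0 = fovea (highest quality),
    increasing outward. gaze and radii are in normalized image coordinates."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys / h - gaze[0], xs / w - gaze[1])
    layers = np.zeros((h, w), dtype=np.int32)
    for r in radii:
        layers += (dist > r).astype(np.int32)  # one step outward per crossed radius
    return layers
```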
## MM (Multi-Model) FR: Generate + Measure the Layer-Wise Quality & FPS

This step needs LightGaussian to prune multiple models; it is already included in our repo.
```shell
bash batch_pnum_analyzer.sh          # analyze the per-layer point count (pnum) of our model
cd ../LightGaussian
bash ./batch_gen_mmFR.sh             # results will be in ./MMFR/ours-Q
cd ../fov3dgs
python3 quality_eval_layers_mmfr.py  # measure quality in each layer; results in ./layers_eval_results/MMFR
bash ./batch_mmfr_fps.sh             # measure FPS; results in ./fps
```
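In the multi-model setup, each foveation layer is rendered with a separately pruned model, and the final frame composites the per-layer renders according to the per-pixel layer map. A conceptual sketch of that compositing step (illustrative, not the repo's renderer):

```python
import numpy as np

def composite(layer_renders, layers):
    """Pick, for each pixel, the color from the render of its assigned layer.

    layer_renders: list of (H, W, 3) arrays, one render per layer id.
    layers: (H, W) int array of per-pixel layer ids.
    """
    stacked = np.stack(layer_renders)  # (L, H, W, 3)
    h, w = layers.shape
    rows = np.arange(h)[:, None]       # broadcast row indices
    cols = np.arange(w)[None, :]       # broadcast column indices
    return stacked[layers, rows, cols]  # (H, W, 3)
```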