Zoubin Bi · Yixin Zeng · Chong Zeng · Fan Pei · Xiang Feng · Kun Zhou · Hongzhi Wu
Project Page | Paper | arXiv | Data
We present a spatial- and angular-Gaussian-based representation together with a triple splatting process for real-time, high-quality novel lighting-and-view synthesis from multi-view, point-lit input images.
conda create --name gs3 python=3.10 pytorch==2.4.1 torchvision==0.19.1 pytorch-cuda=12.4 cuda-toolkit=12.4 cuda-cudart=12.4 -c pytorch -c "nvidia/label/cuda-12.4.0"
conda activate gs3
pip install ninja # speeds up compilation of torch CUDA extensions
pip install -r requirements.txt
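After installation, a quick sanity check can confirm that the core packages resolve inside the activated gs3 environment. This is a minimal sketch of our own; the package list below is an assumption, not an official checker shipped with the repo:

```python
import importlib.util

def missing_packages(pkgs=("torch", "torchvision", "ninja")):
    """Return the subset of pkgs that cannot be imported in the current env."""
    return [p for p in pkgs if importlib.util.find_spec(p) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All core dependencies found.")
```

If anything is reported missing, re-run the conda/pip steps above inside the gs3 environment.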
We also provide a Docker container.
You can use our pre-built Docker image:
docker run -it --gpus all --rm iamncj/gs3:241002
Or you can build your own Docker image:
docker build -t gs3:latest .
We provide a few sample scripts from our paper.
For real captured scenes, please use --cam_opt and --pl_opt to enable camera pose and light optimization.
bash real_train.sh # real captured scenes
bash syn_train.sh # synthetic scenes
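The --cam_opt and --pl_opt switches mentioned above are boolean flags; a minimal argparse sketch of how such flags are typically wired is below (illustrative only; the actual training script may define them differently):

```python
import argparse

def build_parser():
    """Illustrative parser for the pose/light optimization flags (not the repo's actual CLI)."""
    parser = argparse.ArgumentParser(description="GS^3 training options (sketch)")
    parser.add_argument("--cam_opt", action="store_true",
                        help="jointly optimize camera poses (real captured scenes)")
    parser.add_argument("--pl_opt", action="store_true",
                        help="jointly optimize point-light positions (real captured scenes)")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args(["--cam_opt", "--pl_opt"])
    print(args.cam_opt, args.pl_opt)
```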
For real captured scenes, we provide --valid and a corresponding .json file to render a circle view. If you are going to run the test set of real captured scenes, remember to add --opt_pose to use the calibrated poses.
bash real_render.sh # real captured scenes
bash syn_render.sh # synthetic scenes
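The circle-view .json describes a camera orbit around the scene. As a rough sketch, such a path can be generated as evenly spaced positions on a circle; note that the field names below are hypothetical and do not reflect the repo's actual .json schema:

```python
import json
import math

def circle_path(n_frames=60, radius=3.0, height=0.5):
    """Generate camera positions on a horizontal circle around the origin.

    NOTE: illustrative only -- "position"/"look_at" are hypothetical field
    names, not the schema used by the provided .json files.
    """
    frames = []
    for i in range(n_frames):
        theta = 2.0 * math.pi * i / n_frames
        frames.append({
            "position": [radius * math.cos(theta), height, radius * math.sin(theta)],
            "look_at": [0.0, 0.0, 0.0],
        })
    return {"frames": frames}

if __name__ == "__main__":
    print(json.dumps(circle_path(n_frames=4)["frames"][0]))
```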
We release our data and pretrained models on Hugging Face.
Note: for Blender scene rendering, we use the script from NRHints; for pre-captured scene rendering, we use Asuna.
You can download the data by running the following command:
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download --repo-type dataset gsrelight/gsrelight-data --local-dir /path/to/data
Please cite as below if you find this repository helpful to your project:
@inproceedings{bi2024rgs,
title = {GS\textsuperscript{3}: Efficient Relighting with Triple Gaussian Splatting},
author = {Zoubin Bi and Yixin Zeng and Chong Zeng and Fan Pei and Xiang Feng and Kun Zhou and Hongzhi Wu},
booktitle = {SIGGRAPH Asia 2024 Conference Papers},
year = {2024}
}
Part of our dataset is borrowed from NRHints; please also cite NRHints if you use that data.
Our code borrows heavily from gaussian splatting and gsplat. We also use tiny-cuda-nn for its efficient MLP implementation. Many thanks to the authors for sharing their code.