Official PyTorch implementation of HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression.
Yihang Chen, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, Jianfei Cai
[Arxiv] [Project Page] [Github]
CNC [CVPR'24] is now released for efficient NeRF compression! [Paper] [Project Page] [Github]
Our approach introduces a binary hash grid to establish continuous spatial consistencies, allowing us to unveil the inherent spatial relations of anchors through a carefully designed context model. To facilitate entropy coding, we utilize Gaussian distributions to accurately estimate the probability of each quantized attribute, and propose an adaptive quantization module that enables high-precision quantization of these attributes for improved fidelity restoration. Additionally, we incorporate an adaptive masking strategy to eliminate invalid Gaussians and anchors. Importantly, our work pioneers context-based compression for the 3DGS representation, resulting in a remarkable size reduction.
We tested our code on a server with Ubuntu 20.04.1, CUDA 11.8, and gcc 9.4.0.
cd submodules
unzip diff-gaussian-rasterization.zip
unzip gridencoder.zip
unzip simple-knn.zip
cd ..
conda env create --file environment.yml
conda activate HAC_env
First, create a data/ folder inside the project path:
mkdir data
The data structure will be organised as follows:
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
...
./data/blending/drjohnson/
./data/bungeenerf/amsterdam/
./data/mipnerf360/bicycle/
./data/nerf_synthetic/chair/
./data/tandt/train/
bicycle, bonsai, counter, garden, kitchen, room, stump, flowers, treehill
For custom data, you should process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the data/ folder.
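Before training, it can help to confirm that each scene folder matches the layout shown above (an images/ folder with the photos and a sparse/0/ folder with the COLMAP output). A minimal sketch; the helper name is ours, not part of the repo:

```python
import os

def check_scene_layout(scene_dir):
    """Report layout problems for one scene folder.

    Expects images/ (non-empty) and sparse/0/, as in the data tree above.
    Returns a list of problem descriptions; an empty list means OK.
    """
    images = os.path.join(scene_dir, "images")
    sparse0 = os.path.join(scene_dir, "sparse", "0")
    problems = []
    if not os.path.isdir(images) or not os.listdir(images):
        problems.append("missing or empty images/ folder")
    if not os.path.isdir(sparse0):
        problems.append("missing sparse/0/ folder")
    return problems

# Example: check every scene of a dataset (path is illustrative):
# for scene in sorted(os.listdir("data/dataset_name")):
#     print(scene, check_scene_layout(os.path.join("data/dataset_name", scene)))
```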
To train scenes, we provide the following training scripts:
Tanks&Temples: run_shell_tnt.py
MipNeRF360: run_shell_mip360.py
BungeeNeRF: run_shell_bungee.py
Deep Blending: run_shell_db.py
NeRF Synthetic: run_shell_blender.py
Run them with:
python run_shell_xxx.py
The code will automatically run the entire pipeline: training, encoding, decoding, and testing.
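After a run, the final model size is the total size of the files under the ./bitstreams folder of the output directory (see below). A minimal sketch to sum it; the function name is ours:

```python
import os

def dir_size_mb(path):
    """Total size of all files under `path`, in MB (1 MB = 1e6 bytes)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1e6

# Example (path is illustrative):
# print(dir_size_mb("outputs/scene/bitstreams"))
```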
Results will be recorded in output.log of the output directory, including detailed fidelity, size, and time. Bitstreams will be written to ./bitstreams of the output directory, and rendered images to ./test/ours_30000/renders of the output directory. You can change lmbda in these run_shell_xxx.py scripts to try variable bitrates.
point_cloud.ply is losslessly compressed into ./bitstreams. You should refer to ./bitstreams to get the final model size, not point_cloud.ply. You can even delete point_cloud.ply if you like :).
If you find our work helpful, please consider citing:
@inproceedings{hac2024,
title={HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression},
author={Chen, Yihang and Wu, Qianyi and Lin, Weiyao and Harandi, Mehrtash and Cai, Jianfei},
booktitle={European Conference on Computer Vision},
year={2024}
}
Please follow the LICENSE of 3D-GS.