graphdeco-inria / reduced-3dgs

The code for the paper "Reducing the Memory Footprint of 3D Gaussian Splatting"

About the file size? #13

Closed: ZhiyeTang closed this 1 month ago

ZhiyeTang commented 1 month ago

I ran your code on the BungeeNeRF dataset, and the resulting PLY files are hundreds of MB. For instance, point_cloud_quantised_half.ply trained on the Amsterdam scene is 367MB, and codebook.pt is 331.86MB. Judging from other methods, a model trained on BungeeNeRF should not be 10 times larger than one trained on MipNeRF360 (29MB, as reported in your manuscript). Have you tested your method on BungeeNeRF? Am I making a mistake in calculating the file size? How do you calculate the file sizes reported in your manuscript?
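For context, the sizes above are simply the on-disk sizes of the output files; a minimal sketch of that check, where the model directory and iteration folder are assumptions to adjust to your run:

# Rough check of the output sizes in MB (1 MB = 1e6 bytes).
# The model directory and iteration folder below are assumptions; adjust to your run.
DIR=outputs/BungeeNeRF/amsterdam/point_cloud/iteration_30000
for f in point_cloud_quantised_half.ply codebook.pt
do
    awk -v b="$(stat -c %s "$DIR/$f")" -v f="$f" 'BEGIN { printf "%s: %.2f MB\n", f, b / 1e6 }'
done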

PanagiotisP commented 1 month ago

Hello! Unfortunately, I haven't tested the method on this dataset, but I expect it to work. Have you passed the necessary command-line arguments? Running with the default arguments trains the baseline (I might need to change that). Once that is resolved, the files should have the final, reduced sizes (the ones we report in the paper). This may be related to the now-resolved issue https://github.com/graphdeco-inria/reduced-3dgs/issues/10
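To make the distinction concrete, a run like the following (paths are placeholders) uses the default arguments and therefore trains the plain 3DGS baseline; the reduced variants need the extra flags from full_eval.py, as quoted in the next comment:

# Default arguments: this trains the plain 3DGS baseline.
python train.py -s <path/to/scene> -m outputs/<scene>/baseline --eval
# The low/full/high variants require the additional pruning and quantisation
# flags from full_eval.py (see the commands in the next comment).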

ZhiyeTang commented 1 month ago

I re-ran your commands exactly as in the script full_eval.py, and the results are as follows:

| scene \ file size (MB) | high variant | low variant | full final |
|---|---|---|---|
| amsterdam | 134.39 | 324.14 | 59.05 |
| bilbao | 117.96 | 296.14 | 58.09 |
| hollywood | 132.29 | 351.27 | 71.94 |
| pompidou | | | |
| quebec | 119.09 | 311.08 | 50.40 |
| rome | 139.08 | 362.30 | 64.57 |

The file sizes I recorded are those of point_cloud_quantised_half.ply; is this the correct way to measure your method?

The commands I used are:

# For low variant
for scene in "${scenes[@]}"
do
    python train.py -s /data/share/NVS-Datasets/BungeeNeRF/$scene -m outputs/BungeeNeRF/$scene/low --store_grads --lambda_sh_sparsity 0.01 --store_grads --cull_SH 15000 --std_threshold=0.01 --cdist_threshold=1 --mercy_type=redundancy_opacity_opacity --eval
done

# for full final
for scene in "${scenes[@]}"
do
    python train.py -s /data/share/NVS-Datasets/BungeeNeRF/$scene -m outputs/BungeeNeRF/$scene/final --store_grads --lambda_sh_sparsity=0.1 --store_grads --cull_SH 15000 --std_threshold=0.04 --mercy_points --prune_dead_points --store_grads --lambda_alpha_regul=0.001 --std_threshold=0.04 --cdist_threshold=6 --mercy_type=redundancy_opacity_opacity --eval
done

# for high variant
for scene in "${scenes[@]}"
do
    python train.py -s /data/share/NVS-Datasets/BungeeNeRF/$scene -m outputs/BungeeNeRF/$scene/high --store_grads --lambda_sh_sparsity 0.01 --store_grads --cull_SH 15000 --std_threshold=0.06 --cdist_threshold=8 --mercy_type=redundancy_opacity_opacity --eval
done

Note that training on the Pompidou scene crashed with a CUDA out-of-memory error, so I left that row blank.
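For completeness, the sizes in the table above can be collected with a small loop like this (a rough sketch; the output layout and iteration folder are assumptions):

# Tabulate the on-disk size of point_cloud_quantised_half.ply per scene and variant.
# The iteration folder (iteration_30000) is an assumption; adjust to your run.
scenes=(amsterdam bilbao hollywood pompidou quebec rome)
for scene in "${scenes[@]}"
do
    for variant in high low final
    do
        f=outputs/BungeeNeRF/$scene/$variant/point_cloud/iteration_30000/point_cloud_quantised_half.ply
        if [ -f "$f" ]
        then
            awk -v b="$(stat -c %s "$f")" -v n="$scene/$variant" 'BEGIN { printf "%s: %.2f MB\n", n, b / 1e6 }'
        fi
    done
done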

PanagiotisP commented 1 month ago

This seems correct. How do the full final results compare to the original? By the way, codebook.pt is not needed, as its information is stored in the .ply. So yes, in the end you only care about the size of point_cloud_quantised_half.ply.

ZhiyeTang commented 1 month ago

What do you mean by "compared to the original"? The file size of the original point_cloud.ply?

PanagiotisP commented 1 month ago

On the one hand, yes, since point_cloud.ply doesn't contain the quantisation, but I'm mainly referring to training with the baseline's arguments, that is, original 3DGS. In your first comment you said the Amsterdam scene was 367MB, while in the table after that it's 59.05MB. Was that 367MB the quantised baseline (default arguments, i.e. original 3DGS)?

ZhiyeTang commented 1 month ago

In the first comment, I ran your code without any arguments, as a plain python train.py, and the 367MB refers to the file size of the generated point_cloud_quantised_half.ply. If I understand correctly, it is exactly as you said.