yzslab / gaussian-splatting-lightning

A 3D Gaussian Splatting framework with various derived algorithms and an interactive web viewer

[Question] Is the densification/pruning logic different from 3DGS vanilla? #29

Closed seigeweapon closed 5 months ago

seigeweapon commented 5 months ago

Hi, by running gaussian-splatting-lightning and the vanilla 3DGS code on the same dataset, I got Gaussian Splatting models with very different numbers of points.

The initial point cloud has 2.6M points and comes from a COLMAP dense reconstruction. After running the same number of iterations (10,000) on both sides, the output of gaussian-splatting-lightning has only 0.37M points, while the original 3DGS has 2.0M points.

I see no difference in the densification duration or other related parameters, and the densification/pruning code seems to be the same. So why the difference?

The command for gaussian-splatting-lightning: python main.py fit --data.path ../../colmap/stegosaurus_colmap/dense/ -n stego_dense_10000 --max_steps 10000

And the command for 3DGS: python train.py -s ../../colmap/stegosaurus_colmap/dense/ -r 1 --iterations 10000

yzslab commented 5 months ago

Theoretically this should not happen with default hyperparameters. What is the resolution of your images? Were they resized to 1,600 by vanilla 3DGS? If resizing happened, try starting the vanilla 3DGS training with the option -r 1: python train.py -r 1 ....

seigeweapon commented 5 months ago

Updated the post.

The resolution is 3996x2992. I used "-r 1" to make sure there is no downsampling.

yzslab commented 5 months ago

> Updated the post.
>
> The resolution is 3996x2992. I used "-r 1" to make sure there is no downsampling.

Did they load the same point cloud for initialization? Check whether the point counts of the input.ply files located in the two training output directories are identical, because there are some differences in the point cloud loaders.
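
A minimal way to compare the two counts (a sketch, assuming the plyfile package that both repos already depend on; the output directory names below are placeholders):

```python
from plyfile import PlyData

# Placeholder paths -- point these at the two training output directories.
for path in ("output/vanilla_3dgs/input.ply", "output/stego_dense_10000/input.ply"):
    vertex_count = PlyData.read(path)["vertex"].count
    print(path, vertex_count)
```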

It would be better if you could send me your dataset, because I cannot reproduce this problem. Send it to this address if you do not want to show it to others: HIDDEN

seigeweapon commented 5 months ago

Dataset sent to you.

Yes, the input.ply files are identical: 2.6M points.

seigeweapon commented 5 months ago

BTW, I made some changes to load from PLY. I put them here just in case, to help you reproduce it: ply-diff.txt

yzslab commented 5 months ago

> Dataset sent to you.
>
> Yes, the input.ply files are identical: 2.6M points.

Received.

Vanilla 3DGS will load points3D.ply if it exists: https://github.com/graphdeco-inria/gaussian-splatting/blob/472689c0dc70417448fb451bf529ae532d32c095/scene/dataset_readers.py#L157-L170

This repo, however, only loads points3D.bin. Your points3D.ply is generated from the dense reconstruction, not points3D.bin from the sparse reconstruction, so that is why you get different results.
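
The file selection on the vanilla side works roughly like the sketch below (simplified from the linked dataset_readers.py; the function name here is made up for illustration):

```python
import os

def pick_init_point_cloud(sparse_dir):
    """Return the file vanilla 3DGS would initialize from (simplified)."""
    ply_path = os.path.join(sparse_dir, "points3D.ply")
    bin_path = os.path.join(sparse_dir, "points3D.bin")
    # Vanilla 3DGS converts points3D.bin to points3D.ply on the first run and
    # then always reads the .ply -- so a pre-existing dense points3D.ply wins.
    return ply_path if os.path.exists(ply_path) else bin_path
```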

seigeweapon commented 5 months ago

I noticed this, so I made some small changes to enable loading points3D.ply with your code.

yzslab commented 5 months ago

> I noticed this, so I made some small changes to enable loading points3D.ply with your code.

OK, I will test it later.

yzslab commented 5 months ago

> BTW, I made some changes to load from PLY. I put them here just in case, to help you reproduce it: ply-diff.txt

It looks like the RGB values from the PLY file are normalized twice: the first time in your fetchPly(), the second time at https://github.com/yzslab/gaussian-splatting-lightning/blob/f577373d301874f9a4cfce6ab24ff85882e5b5f5/internal/dataset.py#L342

I will test whether the result is related to this.
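
For a sense of scale, a tiny sketch of what dividing by 255 twice does to an initial color (the values are made up for illustration):

```python
import numpy as np

# Example RGB triplet stored in the PLY file as 0-255 values.
raw_rgb = np.array([200.0, 128.0, 64.0])

once = raw_rgb / 255.0   # ~[0.784, 0.502, 0.251] -- the expected 0-1 range
twice = once / 255.0     # ~[0.0031, 0.0020, 0.0010] -- nearly black
print(once, twice)
```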

yzslab commented 5 months ago


It is indeed related to dividing by 255 twice. Remove the / 255.0 operation from your fetchPly(), and you should be able to get the expected result.
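
For reference, a minimal sketch of a patched loader (the function name and return shape are assumptions modeled on vanilla 3DGS's fetchPly(); the point is simply that the colors stay in the 0-255 range so that internal/dataset.py applies the single / 255.0):

```python
import numpy as np
from plyfile import PlyData

def fetch_ply_raw_colors(path):
    """Read xyz and raw 0-255 RGB from a points3D.ply file."""
    vertices = PlyData.read(path)["vertex"]
    xyz = np.vstack([vertices["x"], vertices["y"], vertices["z"]]).T
    # No / 255.0 here -- the dataset code already divides by 255 downstream.
    rgb = np.vstack([vertices["red"], vertices["green"], vertices["blue"]]).T
    return xyz, rgb
```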

seigeweapon commented 5 months ago

You nailed it. Thank you so much.