graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Major visual quality change in newer code update? #49

Closed: henrypearce4D closed this issue 1 year ago

henrypearce4D commented 1 year ago

Hi, I went through the setup of installing the code on a new machine yesterday and noticed that the results now show a major drop in visual quality.

The test was the same dataset trained with the same settings, viewed at the 7000-iteration checkpoint. My dataset is 59 source images at 3008 x 4112, run with -r 4.

I tested copying my original gaussian-splatting project folder over to the new PC and ran the same training; the result had the same good quality as the original.

So my questions are:

Snosixtyboo commented 1 year ago

Hi,

thanks for bringing this up. Just to confirm: you simply copied the folder, trained again, and the result was good? You did not reinstall the submodules (diff-gaussian-rasterization) after copying the folder? That would be useful to know, because it would indicate the change comes only from the Python code.

In that case, could you provide the commit hash of the version that still works well for you (the first line you get when you run git log)? We will then check what has happened in the Python code since then. It is quite possible that this comes from us ensuring that the intended learning rates adapt to the scene extent. In that case it would come down to manually selecting finer learning rates to get back the quality you had.

henrypearce4D commented 1 year ago

Hi, yes, for testing the original project folder on the new PC I only copied the folder and did not reinstall the submodules. Here are the top lines from git log for the original "looking good" version:

commit 737f75406999ff1bc2196bd107bc89caf09afe88 (HEAD -> main, origin/main, origin/HEAD)
Author: bkerbl <bkerbl@ad.inria.fr>
Date:   Fri Jul 14 16:18:22 2023 +0200

    Pushed diff-rasterizer

Snosixtyboo commented 1 year ago

Thanks a lot. It seems like the only real difference would be what I mentioned above. Rather than sharing data sets, I think it would be faster if you could try the following:

In scene/gaussian_model.py (broken or good version, it doesn't matter), in the function create_from_pcd, can you add the line print("DEBUG MESSAGE", spatial_lr_scale) and tell us what it prints when you run your dataset?
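For reference, a rough sketch of where that print would sit (the surrounding code is abridged here and may differ between versions; only the print line matters):

```python
# scene/gaussian_model.py (sketch; body abridged, exact code may differ)
def create_from_pcd(self, pcd, spatial_lr_scale):
    self.spatial_lr_scale = spatial_lr_scale
    print("DEBUG MESSAGE", spatial_lr_scale)  # scene extent used to scale learning rates
    # ... remaining initialization of the Gaussians from the SfM point cloud ...
```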

Thanks and all the best, Bernhard

henrypearce4D commented 1 year ago

Hi, this is the message from the original "good" version:

DEBUG MESSAGE 90.02682418823242 [21/07 13:02:38]

and I just checked the latest "not good" version:

DEBUG MESSAGE 90.02682418823242 [21/07 13:07:59]

Snosixtyboo commented 1 year ago

Thanks! If my math is right, then you should get the results you had initially (or very close) by using:

--position_lr_init 0.000008 --position_lr_final 0.00000008

Let me know!

This is using a lower learning rate than what we consider a good default. It's possible that this works better for your setup, e.g., if your initial points are already dense, but there is also a decent chance that the final 30k result actually becomes better with the new version even though the 7k result looks worse, since the method is free to move points more early in the optimization.
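For context, a minimal sketch of how those flags are understood to interact with the scene extent printed above (this assumes the optimizer setup in scene/gaussian_model.py multiplies the requested rate by spatial_lr_scale; exact names and code may differ between versions):

```python
# scene/gaussian_model.py, training_setup (sketch, abridged; not the verbatim implementation)
import torch

def training_setup(self, training_args):
    # The position learning rate from the command line is scaled by the scene
    # extent, so with spatial_lr_scale ≈ 90.03 an init value of 0.0000088
    # yields an effective initial rate of roughly 0.00079.
    param_groups = [
        {"params": [self._xyz],
         "lr": training_args.position_lr_init * self.spatial_lr_scale,
         "name": "xyz"},
        # ... other parameter groups (features, opacity, scaling, rotation) ...
    ]
    self.optimizer = torch.optim.Adam(param_groups, lr=0.0, eps=1e-15)
```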

henrypearce4D commented 1 year ago

OK thanks, that does make a difference and it's very close now, but still less detailed at certain depths on the subject.

Is there a way to work out the best options for the dataset rather than trial and error?

Snosixtyboo commented 1 year ago

Hi, if you use the exact values --position_lr_init 0.0000088 --position_lr_final 0.000000088, there should not be any difference that is not due to randomness. If you find something is repeatably off, it could warrant further investigation.
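For instance, a full training invocation with those overrides might look like the following (the dataset path is a placeholder; the other flags are the ones mentioned earlier in this thread):

```shell
python train.py -s /path/to/colmap/dataset -r 4 \
    --position_lr_init 0.0000088 --position_lr_final 0.000000088
```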

Unfortunately, we don't have a great solution for finding the ideal parameters for arbitrary data sets right now. Note that this is true for any reconstruction method out there: you probably don't get the best possible quality for your data set with the default settings. We found our defaults to work well for COLMAP data sets of single objects from SfM points. Denser points or scenes with many objects will benefit from changed parameters. However, once found, the parameters should be reusable across the same kind of setup: if you have good ones for your configuration, they should work fine regardless of the subject.

Hth, Bernhard