Han230104 opened 6 months ago
This is a great boost!! Before moving grid_x and grid_y to the GPU, the average training time on my RTX 3090 was about 50 min.
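For anyone curious, the kind of change I mean is roughly this: build the pixel coordinate grids directly on the GPU instead of creating them on the CPU and copying them over. This is just a minimal sketch, not the repo's actual code; the names `W`, `H`, and the helper are illustrative.

```python
import torch

# Minimal sketch: allocate the per-pixel coordinate grids on the GPU up front
# instead of constructing them on the CPU and transferring each iteration.
# W, H and the function name are illustrative, not the repo's actual code.
def make_pixel_grid(W, H, device="cuda"):
    xs = torch.arange(W, dtype=torch.float32, device=device)
    ys = torch.arange(H, dtype=torch.float32, device=device)
    # 'ij' indexing: grid_y[i, j] = i (row), grid_x[i, j] = j (column)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    return grid_x, grid_y
```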
I implemented an unofficial version of 2DGS before.
I made some modifications: I changed the way the gradient of the projected 2D center is computed, and adjusted parameters such as prune_interval (1500) and opacity_cull (0.2).
The average training time is now about 20 min while maintaining accuracy.
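Roughly, the pruning change looks like the sketch below. prune_interval and opacity_cull are the values I used, but the function and tensor names are illustrative and simplified, not my actual implementation.

```python
import torch

PRUNE_INTERVAL = 1500  # prune every 1500 iterations (the value I used)
OPACITY_CULL = 0.2     # cull Gaussians whose opacity falls below this (the value I used)

def prune_mask(step, opacity):
    """Boolean keep-mask every PRUNE_INTERVAL steps, else None (no pruning).

    `opacity` is assumed to be the activated (0..1) per-Gaussian opacity, shape (N, 1).
    """
    if step % PRUNE_INTERVAL != 0:
        return None
    return opacity.squeeze(-1) > OPACITY_CULL

# tiny demo with random opacities
if __name__ == "__main__":
    opacity = torch.rand(10, 1)
    keep = prune_mask(1500, opacity)
    print(keep)  # True where opacity > 0.2 at a pruning step
```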
If you are interested, please check it out. Thank you!
Great work! But it seems that the link is broken and I cannot access it. I would greatly appreciate it if you could open a PR or outline the changes; I will then test it. Thank you for your great contribution!
Great. I found the correct link in your GitHub profile. I will check it later.
I implemented my version by referencing your Python demo, so our CUDA implementations are actually quite different. I made some modifications to the official version based on my experience, but they didn't improve the speed as much as I expected. I may need more time to check the differences.
Excellent work! I'm evaluating the MipNeRF360 dataset on a single RTX 4090 GPU. The average training time over the 9 scenes is 29 min (with grid_x and grid_y already moved to the GPU). I wonder whether this training time is normal, since the paper doesn't report training times. Thank you!