hot-dog opened 4 months ago
Hi, what GPU and which dataset are you using? Can you provide a comparison of the training speed against mip-splatting or 3DGS?
Thanks for your reply. I am using an NVIDIA RTX A6000 GPU and a custom dataset; my images are fairly large (4372x2916), which I think is one reason the training is so slow. I have also tried the Mip-NeRF 360 dataset with the script you provide: training is fast at the beginning but slows down to about 1.x iter/s as it goes on. Could you explain why? I also have another question: my images have been cropped, so the principal point is not at the image center. I modified the projection matrix accordingly, and these modifications work fine with 3DGS but not with GOF; the training loss does not converge. Any advice? Thank you :)
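(For reference, the kind of projection-matrix change I made is roughly the sketch below, based on 3DGS's `getProjectionMatrix`: the frustum is made asymmetric so an off-center principal point (cx, cy) is respected. The helper name and the exact sign conventions are my own assumptions, not code from this repo.)

```python
import math
import torch

def get_projection_matrix_offcenter(znear, zfar, fovX, fovY, cx, cy, width, height):
    # Asymmetric frustum derived from the pinhole model: pixel 0 maps to one
    # near-plane boundary and pixel width/height to the other, so a principal
    # point away from the image center shifts the frustum sideways.
    fx = width / (2.0 * math.tan(fovX / 2.0))
    fy = height / (2.0 * math.tan(fovY / 2.0))
    left = -cx * znear / fx
    right = (width - cx) * znear / fx
    bottom = -cy * znear / fy
    top = (height - cy) * znear / fy

    P = torch.zeros(4, 4)
    P[0, 0] = 2.0 * znear / (right - left)
    P[1, 1] = 2.0 * znear / (top - bottom)
    P[0, 2] = (right + left) / (right - left)   # 0 when cx == width / 2
    P[1, 2] = (top + bottom) / (top - bottom)   # 0 when cy == height / 2
    P[2, 2] = zfar / (zfar - znear)
    P[2, 3] = -(zfar * znear) / (zfar - znear)
    P[3, 2] = 1.0
    return P
```

With cx = width / 2 and cy = height / 2 this reduces to the standard symmetric matrix, which is why 3DGS still trains fine; the y-axis sign convention may need flipping depending on the codebase.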
I also encountered the same problem
I also encountered the same problem
Did you also encounter the problem of the training loss not converging?
Yes, the loss keeps declining before 15,000 iterations, but it does not converge after 15,000 iterations. I have trained like this many times.
Hi, training with high-resolution images will be slow.
As training progresses, more Gaussians are allocated, so it gets slower, but 1.x iters/s still looks very strange.
Our current implementation expects the principal point to be at the center of the image; you can crop your images so that it is centered.
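A minimal cropping sketch (the helper below is only an illustration, not part of this repo; it assumes pinhole intrinsics with the principal point (cx, cy) given in pixels, e.g. from COLMAP): keep the largest rectangle symmetric around (cx, cy), so the new principal point is exactly the image center.

```python
from PIL import Image

def crop_to_center_principal_point(image_path, cx, cy):
    # Keep the largest rectangle that is symmetric around (cx, cy); after the
    # crop the principal point lies exactly at the new image center.
    img = Image.open(image_path)
    w, h = img.size
    half_w = min(cx, w - cx)   # largest half-width that stays inside the image
    half_h = min(cy, h - cy)
    box = (round(cx - half_w), round(cy - half_h),
           round(cx + half_w), round(cy + half_h))
    cropped = img.crop(box)
    new_w, new_h = cropped.size
    # fx/fy are unchanged; the FoV passed to training must be recomputed from
    # the new size, e.g. fovX = 2 * atan(new_w / (2 * fx)).
    return cropped, new_w / 2.0, new_h / 2.0
```

This throws away pixels on one side of the image, but it keeps the intrinsics consistent without touching the rasterizer.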
@hot-dog @leomessi999 Could you provide more details of the convergence issues?
I am using an A6000 GPU and my training is also very slow. Maybe it's a GPU-specific issue?
I am using a 4090 GPU and my training is also very slow. Is there a problem with the code?
@guwinston Hi, which dataset are you using? Can you check how many Gaussians are used during training? It will be slow if there are too many Gaussians.
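A quick way to check is to log the point count (and peak GPU memory) every few hundred iterations in train.py. The helper below is only a sketch; it assumes the GaussianModel exposes get_xyz as an (N, 3) tensor, as in the original 3DGS code.

```python
import torch

def log_gaussian_stats(iteration, gaussians, every=100):
    # Print the current number of Gaussians and the peak CUDA memory so far.
    # `gaussians.get_xyz` is the (N, 3) position tensor in the original 3DGS
    # GaussianModel; the attribute name is assumed to be the same here.
    if iteration % every != 0:
        return
    num_points = gaussians.get_xyz.shape[0]
    peak_gib = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"[iter {iteration:6d}] gaussians: {num_points:,}  peak CUDA mem: {peak_gib:.2f} GiB")
```

Calling this from the training loop, e.g. right after densification, makes it easy to see whether the slowdown tracks the growth in the number of Gaussians or the GPU memory filling up.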
@niujinshuchong I am using the Mip-NeRF 360 bicycle scene with an image downscaling factor of 4, and I printed the number of Gaussians in the training log. It seems there are too many points. Do you have any solutions?
@guwinston @niujinshuchong +1, seeing the same behaviour on the Mip-NeRF 360 dataset with an RTX 4090. I assumed it was memory related because training seems fine until it reaches full memory allocation, and then performance drops off.
@Loppas How long does it take for you to train the bicycle scene with an RTX 4090?
The training speed is several seconds per iteration. What could be the reason?