autonomousvision / mip-splatting

[CVPR'24 Best Student Paper] Mip-Splatting: Alias-free 3D Gaussian Splatting
https://niujinshuchong.github.io/mip-splatting/

GPU Memory - Massive Issue #38

Open Fanie-Visagie opened 2 months ago

Fanie-Visagie commented 2 months ago

Work is needed on the GPU memory allocation. I can't get results comparable to the original paper because GPU memory maxes out far too quickly.

I suggest another pass based on the latest 3DGS implementation, with your code revisions applied.

Hope this helps.

It looks good, but I can't use it at all or even see results.

Aur1anna commented 2 months ago

Hi. It looks like I am facing the same issue as you. However, I tried some changes that might help you.

I am using a single 4090, and when I was training the bicycle scene, it ran OOM after 5000+ iterations. I checked a previous issue where the author mentioned that the parameter on line 163 of train.py could be increased from 0.005 to 0.05 or 0.5. I changed it to 0.05, but it still resulted in OOM. This time, I pressed Ctrl+C when the OOM occurred during the bicycle training, which terminated that run and continued to the next scene of 360_v2. Fortunately, I successfully trained four scenes of the dataset: bonsai, counter, kitchen, and room. The other scenes still resulted in OOM.

By the way, I am not sure whether the parameter modification had any effect. Maybe you can experiment according to your own situation.
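For context, here is a minimal self-contained illustration of why raising that threshold can reduce memory pressure, assuming (as in the 3DGS codebase this repo is based on) that the 0.005 value is a minimum-opacity pruning threshold: a higher threshold prunes more low-opacity Gaussians, so fewer survive densification. The function name here is hypothetical, not the repo's actual API.

```python
# Illustration only: how raising an opacity pruning threshold shrinks the
# number of retained Gaussians (and hence GPU memory). `prune_by_opacity`
# is a hypothetical stand-in, not mip-splatting's real function.
import random

def prune_by_opacity(opacities, min_opacity):
    """Keep only Gaussians whose opacity exceeds the threshold."""
    return [o for o in opacities if o > min_opacity]

random.seed(0)
opacities = [random.random() for _ in range(100_000)]  # fake opacities in [0, 1)

kept_default = prune_by_opacity(opacities, 0.005)  # roughly 99.5% survive
kept_raised = prune_by_opacity(opacities, 0.05)    # roughly 95% survive
print(len(kept_default), len(kept_raised))
```

The trade-off is quality: pruning more aggressively caps memory but can remove Gaussians the scene actually needs, which may explain why results vary per scene.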

niujinshuchong commented 2 months ago

Hi, the 4090 has 24GB of GPU memory, which should be enough to run the mip-nerf 360 dataset. @Fanie-Visagie are you using a different dataset? Or you could change `-r {factor}` to `-i image_{factor}` here https://github.com/autonomousvision/mip-splatting/blob/main/scripts/run_mipnerf360.py#L21 so it won't load the original high-res images.
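The suggested change amounts to pointing the loader at the pre-downscaled image folders shipped with the MipNeRF-360 dataset instead of downscaling the full-resolution images in memory. A sketch of what the edited command construction might look like (the exact line in scripts/run_mipnerf360.py may differ; paths and scene names here are examples):

```python
# Hypothetical sketch of the training command built in run_mipnerf360.py.
# "-r {factor}" loads the full-res images and downscales them on the fly
# (high peak memory); "-i image_{factor}" reads an already-downscaled
# image folder directly.
factor = 4
scene = "bicycle"

# Before: downscale on the fly from the original high-res images.
cmd_before = f"python train.py -s data/360_v2/{scene} -r {factor}"

# After: read the pre-downscaled image folder directly.
cmd_after = f"python train.py -s data/360_v2/{scene} -i image_{factor}"
print(cmd_after)
```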

Aur1anna commented 2 months ago

@niujinshuchong Hi, thank you so much for your help. When I train 360_v2, I use `python scripts/run_mipnerf360.py` — should I try other options? Or should I change the parameter on line 163 of train.py from 0.05 up to 0.5, or back to 0.005? And by the way, I also want to try some of my own datasets, and it looks like some code needs to be modified. Can you give me any advice?

niujinshuchong commented 2 months ago

Yes, of course you can still change the parameter to use fewer Gaussians. If you process your data with COLMAP, you can train on it directly.
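Assuming mip-splatting expects the standard 3DGS/COLMAP scene layout (an `images/` folder plus a `sparse/0/` reconstruction, which is the convention in the 3DGS codebase it builds on), a quick sanity check on your own dataset might look like this. The helper name is hypothetical:

```python
# Quick layout check for a COLMAP-processed scene, assuming the standard
# 3DGS convention of <scene>/images plus <scene>/sparse/0.
import tempfile
from pathlib import Path

def looks_like_colmap_scene(root):
    """Return True if the directory has the expected images/ and sparse/0/ layout."""
    root = Path(root)
    return (root / "images").is_dir() and (root / "sparse" / "0").is_dir()

# Demonstrate with a temporary stand-in directory.
with tempfile.TemporaryDirectory() as tmp:
    scene = Path(tmp) / "my_scene"
    (scene / "images").mkdir(parents=True)
    (scene / "sparse" / "0").mkdir(parents=True)
    ok = looks_like_colmap_scene(scene)
    print(ok)
```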

Fanie-Visagie commented 2 months ago

Yeah, not quite. I am using my own dataset of 200 photos, processed through COLMAP. Every other library handles this in 30 minutes. Yours spikes (even with the imagery allocated to the CPU) between 2,000 and 3,000 iterations, and then the GPU maxes out??

I too am running a 4090 with 24GB of VRAM. Testing only the bicycle scene is not enough; please test other datasets as well if this library is to become commonplace for users instead of 3DGS.

Fanie-Visagie commented 2 months ago

Using 50 images... (screenshot attached)

Fanie-Visagie commented 2 months ago

@Aur1anna I would appreciate it if we could catch up to swap notes. Do you have WeChat?

Aur1anna commented 2 months ago

> @Aur1anna I would appreciate it if we could catch up to swap notes. Do you have WeChat?

Yeah, maybe you can add me: gqs3290024845.