Closed pablodawson closed 10 months ago
Hi, this is actually quite normal.
The VRAM footprint during training comes from several sources.
In my experiments, the VRAM overhead of the query MLP is not significant. The growth you are seeing happens because, before iteration 15k, the 3D Gaussians in the canonical space are still being densified, so the number of points (and the memory they require) keeps increasing.
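To see why densification alone can explain the growth, here is a rough back-of-envelope sketch: parameter memory scales linearly with the number of Gaussians. The attribute counts below assume the standard 3DGS layout (SH degree 3) and an Adam optimizer; exact numbers vary by implementation, and this ignores rasterizer activations and gradients of intermediate buffers.

```python
# Rough estimate of training-time memory per Gaussian, assuming the
# standard 3DGS attribute layout (hypothetical counts, not taken from
# this repo's code).
FLOATS_PER_GAUSSIAN = (
    3    # position
    + 3  # scale
    + 4  # rotation quaternion
    + 1  # opacity
    + 48 # SH coefficients: 16 per channel x 3 channels (degree 3)
)

def training_bytes(num_gaussians, bytes_per_float=4, copies=4):
    # copies=4: parameters + gradients + two Adam moment buffers
    return num_gaussians * FLOATS_PER_GAUSSIAN * bytes_per_float * copies

for n in (100_000, 1_000_000, 3_000_000):
    gb = training_bytes(n) / 2**30
    print(f"{n:>9,} Gaussians -> ~{gb:.2f} GB for params + optimizer state")
```

So going from a few hundred thousand to a few million Gaussians during densification plausibly accounts for gigabytes of extra VRAM, independent of any leak.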
Hey,
Even after your latest commit, I see GPU memory usage going up as training progresses, and training getting slower.
On a 4090 (24 GB VRAM), for example:
- Iter 5000: 15% GPU memory usage, ETA 36:28
- Iter 14000: 65% GPU memory usage, ETA 1:20:10
Is this normal? Maybe a detach() is missing somewhere?
Thanks!
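For reference, the kind of leak a missing detach() causes is easy to reproduce: if a logged loss keeps a reference to its computation graph, every iteration's activations stay alive. The sketch below simulates this in plain Python (no torch dependency): `FakeLoss` stands in for a scalar tensor, its `graph` attribute for the autograd graph, and `item()` for `loss.item()`/`loss.detach()`. All names here are illustrative, not from this repo.

```python
import weakref

class Activations:
    """Large per-iteration buffer we would like freed each step."""
    def __init__(self, n):
        self.data = bytearray(n)

class FakeLoss:
    """Stand-in for a scalar tensor: like autograd, it holds a reference
    to the graph (here, the activation buffer) that produced it."""
    def __init__(self, value, graph):
        self.value = value
        self.graph = graph

    def item(self):
        # Like tensor.item() / detach(): a plain number, no graph reference.
        return self.value

def run(iters, detach):
    history = []     # e.g. a running-loss log kept across iterations
    graph_refs = []  # weak refs so we can check what stayed alive
    for i in range(iters):
        acts = Activations(1 << 10)
        graph_refs.append(weakref.ref(acts))
        loss = FakeLoss(float(i), acts)
        history.append(loss.item() if detach else loss)
        del acts, loss  # drop locals; only `history` can keep them alive
    return sum(r() is not None for r in graph_refs)

print(run(100, detach=False))  # without detach: all 100 graphs retained
print(run(100, detach=True))   # with detach: all graphs freed
```

The symptom of such a leak is memory growing with every iteration regardless of the number of Gaussians, whereas densification-driven growth should plateau once densification stops around 15k.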