Closed — wwwjn closed this issue 5 years ago
Fluctuating is common. I haven't tested on the 1080 Ti. Maybe at least 12 GB of GPU memory is required, which is our test environment.
Thank you so much!!
Hi, many thanks for your reply. I went through the details of BilateralNN.py to find out why the GPU memory fluctuates, but I'm still not clear about it. Do you know why the GPU memory fluctuates so fast? (When I was training on KITTI, the GPU memory usage changed very frequently, for example every second.) And when does the GPU memory usage reach its peak? Thanks a lot!!
Because we use sparse CNNs, the inputs have varying sizes -- depending on the actual volume the point clouds occupy -- as stated in our paper.
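To illustrate why memory varies per frame (this is a hand-rolled sketch, not code from this repo): a sparse CNN only allocates work for occupied voxels, so two frames with the same point count but different spatial extents occupy very different numbers of cells. The `count_occupied_voxels` helper and the 0.05 voxel size below are illustrative assumptions.

```python
import numpy as np

def count_occupied_voxels(points, voxel_size=0.05):
    """Quantize points to a voxel grid and count unique occupied cells.

    A sparse CNN's memory use scales with this count, not with the
    raw point count, so frames spanning a larger volume cost more.
    """
    coords = np.floor(points / voxel_size).astype(np.int64)
    return len(np.unique(coords, axis=0))

rng = np.random.default_rng(0)
n = 8192
compact = rng.uniform(0.0, 1.0, size=(n, 3))    # 8192 points in a 1 m cube
spread = rng.uniform(0.0, 10.0, size=(n, 3))    # same count, 10 m cube

occ_compact = count_occupied_voxels(compact)
occ_spread = count_occupied_voxels(spread)
print(occ_compact, occ_spread)  # the spread cloud occupies many more voxels
```

With the same 8192 points per frame, the spread-out cloud touches far more voxels, which is why memory usage fluctuates frame to frame even at a fixed sample count.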
I am training on a Titan RTX GPU with 24GB memory and I have the same issue. My training suddenly stopped with a memory error. If you say that at least 12GB is needed, then I do not know why I have a memory error.
In our setting, training on FlyingThings3D with 8192 sampled points per frame works well and fits in 12 GB of memory.
If you are using a different training set, then my suggestions for possible solutions are: 1) adjust the scaling factors; 2) remove data points that occupy a large volume, if there are only a few of them; or 3) split the whole point cloud into chunks and train on one chunk at a time.
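Suggestion 3) could be sketched like this (an illustrative helper, not part of this repo; the `split_into_chunks` name and the 5 m chunk size are assumptions):

```python
import numpy as np

def split_into_chunks(points, chunk_size=5.0):
    """Split a point cloud into axis-aligned spatial chunks.

    Each returned subset covers at most a chunk_size cube, so it can
    be fed to the network separately and stay within GPU memory.
    """
    # Assign each point to a grid cell, then group points by cell.
    keys = np.floor(points / chunk_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    return [points[inverse == i] for i in range(inverse.max() + 1)]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 20.0, size=(8192, 3))  # a 20 m cube of points
chunks = split_into_chunks(cloud, chunk_size=5.0)
print(len(chunks))  # number of occupied chunks
```

Every point lands in exactly one chunk, so training on the chunks one at a time covers the whole cloud while bounding the volume (and thus the sparse-CNN memory) per forward pass.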
Hi @laoreja, when I was training your model on my own GPU, the GPU memory usage was always fluctuating. Sometimes it even ran out of memory. I was training on the KITTI dataset, using your code, and my GPU is a 1080 Ti. Is that common? Or did I make a mistake when trying to train on KITTI? Thank you so much!