laoreja / HPLFlowNet

Code for our CVPR 2019 paper, HPLFlowNet: Hierarchical Permutohedral Lattice FlowNet for Scene Flow Estimation on Large-scale Point Clouds.
GNU General Public License v3.0

GPU memory usage fluctuates. #7

Closed wwwjn closed 5 years ago

wwwjn commented 5 years ago

Hi @laoreja, when I was training your model on my own GPU, the GPU memory usage fluctuated constantly, and sometimes it even ran out of memory. I am training on the KITTI dataset with your code, and my GPU is a 1080Ti. Is that common, or did I make a mistake when training on KITTI? Thank you so much!

laoreja commented 5 years ago

Fluctuation is common. I haven't tested on a 1080Ti. At least 12GB of GPU memory is probably required, which is our test environment.

wwwjn commented 5 years ago

Thank you so much!!

wwwjn commented 5 years ago

Hi, many thanks for your reply. I looked into the details of BilateralNN.py to find out why the GPU memory fluctuates, but I'm still not clear about it. Do you know why the GPU memory fluctuates so quickly? (When I was training on KITTI, the memory usage changed very frequently, e.g., every second.) And when does the GPU memory usage peak? Thanks a lot!!

laoreja commented 5 years ago

Because we use sparse CNNs, the input has variable length -- it depends on the actual volume the point clouds occupy -- as stated in our paper.
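To illustrate the point above with a toy sketch (this is not code from the repo, and `occupied_cells` is a hypothetical helper): the number of occupied cells after quantization -- a rough proxy for sparse-convolution memory -- depends on the spatial extent of the cloud, not just on the point count, so batches drawn from scenes of different extent allocate different amounts of memory.

```python
import numpy as np

def occupied_cells(points, cell_size):
    """Count the unique cells occupied after quantizing `points`
    (an N x 3 array) at resolution `cell_size`."""
    cells = np.floor(points / cell_size).astype(np.int64)
    return len({tuple(c) for c in cells})

rng = np.random.default_rng(0)
n = 8192
compact = rng.uniform(0.0, 1.0, size=(n, 3))  # n points in a 1 m cube
spread = rng.uniform(0.0, 4.0, size=(n, 3))   # same count, 4 m cube

# The spread-out cloud occupies many more cells, so its sparse
# representation (and the memory it needs) is much larger even
# though both clouds contain exactly 8192 points.
a = occupied_cells(compact, 0.1)
b = occupied_cells(spread, 0.1)
```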

munib94 commented 4 years ago

I am training on a Titan RTX GPU with 24GB of memory and I have the same issue: training suddenly stopped with an out-of-memory error. If at least 12GB is enough, I do not know why I get a memory error.

laoreja commented 3 years ago

In our setting, training on FlyingThings3D with 8192 sampled points per frame works well and fits in 12 GB of memory.

If you are using a different training set, my suggestions for possible solutions are: 1) adjust the scaling factors; 2) remove the data points that occupy a large volume, if there are only a few of them; or 3) split the whole point cloud into chunks and train on one chunk at a time.
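Option 3 could be sketched roughly as follows (`split_into_chunks` is a hypothetical helper, not part of HPLFlowNet): partition the cloud into fixed-width slabs along one axis so each training example covers a bounded volume, which bounds the sparse-convolution memory per step.

```python
import numpy as np

def split_into_chunks(points, chunk_extent):
    """Partition points (an N x 3 array) into slabs of width
    `chunk_extent` along the x axis; returns a list of subsets."""
    bins = np.floor(points[:, 0] / chunk_extent).astype(np.int64)
    return [points[bins == b] for b in np.unique(bins)]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 30.0, size=(4096, 3))  # a 30 m-long scene
chunks = split_into_chunks(cloud, 10.0)
# Each chunk spans at most 10 m in x, so the occupied volume (and
# hence the lattice size) per training step stays bounded even for
# large scenes like KITTI.
```

In practice one would also need to handle points near chunk borders (e.g., with a small overlap) so that flow targets crossing a boundary are not lost; this sketch omits that.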