I am currently adapting your code to my own project, but I found the GPU memory usage is exceedingly large. I tested the memory usage of the Lovász-softmax loss, which showed it uses nearly 2000 MB with batch size 1. I don't think this is normal. Have you looked into the reasons behind the unusually large GPU memory usage? Could you point out which part of the network occupies the most memory besides the Lovász-softmax loss?
I think the GPU memory consumption mostly comes from two places:

1. The raw point cloud input. We use the unprojected point cloud as input; in the SemanticKITTI case, each scan has more than 100,000 points.
2. The grid size. The output of our model has shape (B x C x H x W x Z), which can be huge, especially during the backward pass, for our default setting --grid_size 480 360 32. Changing it to --grid_size 320 240 32 or even smaller will help a lot if GPU memory is the problem.
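To see why the grid size dominates, here is a rough back-of-the-envelope estimate of the memory taken by a single dense float32 output tensor of shape (B, C, H, W, Z). The class count C = 20 is an assumption matching SemanticKITTI's label set; the helper function is purely illustrative, and the real footprint is several times larger once intermediate activations are saved for the backward pass.

```python
def dense_output_mb(batch, classes, grid, bytes_per_elem=4):
    """Memory (in MB) of one dense float32 tensor of shape (B, C, H, W, Z)."""
    h, w, z = grid
    return batch * classes * h * w * z * bytes_per_elem / 1024**2

# Default setting: --grid_size 480 360 32
default = dense_output_mb(1, 20, (480, 360, 32))
# Reduced setting: --grid_size 320 240 32
smaller = dense_output_mb(1, 20, (320, 240, 32))

print(f"480x360x32: {default:.1f} MB per tensor")  # ~422 MB
print(f"320x240x32: {smaller:.1f} MB per tensor")  # ~188 MB
```

So even a single logits tensor at the default grid size is on the order of hundreds of MB, and shrinking the grid reduces it proportionally.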