Closed sky-fly97 closed 1 year ago
Hi,
The spatial voxel pruning is borrowed from this paper. https://arxiv.org/abs/2209.14201
It saves about 50% of FLOPs, but the actual speed-up with the current implementation is less than 10%. So I use the default backbone network to keep the code clean. You can also try VoxelResBackBone8xVoxelNeXtSPS if you like; it leads to almost no performance drop.
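For anyone who wants to try the pruning variant, switching backbones is a one-line config change. This is a hedged sketch in the usual OpenPCDet config layout; the class name comes from this thread, but the exact keys and the rest of the config may differ in this repo:

```yaml
# Sketch only: swap the 3D backbone to the spatial-pruning (SPS) variant.
# Surrounding keys follow the common OpenPCDet layout and are assumptions.
MODEL:
    BACKBONE_3D:
        NAME: VoxelResBackBone8xVoxelNeXtSPS   # default is VoxelResBackBone8xVoxelNeXt
```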
Regards, Yukang Chen
Thanks a lot. Is this also the case with SparseMaxPool? The paper shows that SparseMaxPool is better than NMS, but the configs that report performance here all seem to use nms_gpu.
Hi,
SparseMaxPool can indeed save computational cost. You need to install spconv from source if you want to use it.
I use nms_gpu in this repo by default because I know that installing spconv from source is a tiring job. The configs and files for the max-pooling version are all provided in this project.
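The idea behind replacing NMS with max pooling is CenterNet-style local-maximum filtering: a prediction survives only if it is the peak of its neighborhood, so no IoU-based suppression pass is needed. VoxelNeXt's SparseMaxPool applies this to sparse voxel features via spconv; the dense NumPy toy below only illustrates the principle and is not the repo's implementation:

```python
# Hedged sketch: NMS-free peak selection via max pooling.
# A cell is kept only if it equals the max of its k x k neighborhood,
# which suppresses overlapping detections without any IoU-based NMS.
import numpy as np

def maxpool_peaks(heatmap, k=3):
    """Zero out every cell that is not the maximum of its k x k window."""
    pad = k // 2
    padded = np.pad(heatmap, pad, constant_values=-np.inf)
    pooled = np.empty_like(heatmap)
    H, W = heatmap.shape
    for i in range(H):
        for j in range(W):
            pooled[i, j] = padded[i:i + k, j:j + k].max()
    return heatmap * (heatmap == pooled)

heat = np.array([
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.3],
    [0.1, 0.3, 0.2],
])
peaks = maxpool_peaks(heat)
# Only the local maximum at (1, 1) survives; its lower-scoring
# neighbors are suppressed by the pooling comparison.
```

In the sparse setting the same comparison runs only on occupied voxels, which is where the computational saving over dense NMS comes from.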
Regards, Yukang Chen
Hi,
I will close this issue. Please feel free to contact me or open other issues, if there are any other problems.
Regards, Yukang Chen
Hello, I noticed that you do not use spatial voxel pruning (i.e., VoxelResBackBone8xVoxelNeXt2DSPS or VoxelResBackBone8xVoxelNeXtSPS) on any dataset. So is pruning just a trick for trading FLOPs against performance?