Closed — zehuichen123 closed this issue 2 years ago
We have not supported mixed-precision training for CenterPoint yet, so for now you can only refer to the implementations of PointPillars and SECOND and adjust some details of CenterPoint accordingly.
@Tai-Wang Hi, I've rechecked the code in mmdet3d, and SECOND does not actually support fp16 either, even though an example config exists (https://github.com/open-mmlab/mmdetection3d/blob/master/configs/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py). I think this is a bug.
I found that the direct cause may be related to the `self.weight` of `SparseConvolution` (ops/spconv/conv.py): it is not converted to fp16 during mixed-precision training. But currently I have no idea how to fix it :)
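If the cause really is that `self.weight` stays in fp32 while the features arrive in fp16, one blunt workaround would be to cast those weights by hand after building the model. The sketch below is a guess at such a patch, not actual mmdet3d code: the helper `cast_spconv_weights_to_half` is a hypothetical name, and it matches modules by the class name `SparseConvolution` (from mmdet3d/ops/spconv/conv.py) to avoid importing the op here.

```python
import torch


def cast_spconv_weights_to_half(model):
    """Hypothetical workaround (not mmdet3d code): walk the model and
    convert the fp32 weights of sparse-conv modules to fp16 so that
    they match fp16 input features under mixed-precision training.

    'SparseConvolution' is the class name from
    mmdet3d/ops/spconv/conv.py; matching by name avoids the import.
    """
    for module in model.modules():
        if (type(module).__name__ == 'SparseConvolution'
                and module.weight.dtype == torch.float32):
            module.weight.data = module.weight.data.half()
    return model


# Minimal stand-in to demonstrate the cast; the real class wraps the
# spconv CUDA kernels and has a very different constructor.
class SparseConvolution(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.weight = torch.nn.Parameter(
            torch.randn(in_channels, out_channels))


net = torch.nn.Sequential(SparseConvolution(4, 8))
cast_spconv_weights_to_half(net)
converted_dtype = net[0].weight.dtype  # now torch.float16
```

Note this only converts the parameter storage; whether the spconv CUDA kernels accept half-precision filters is a separate question, which is likely where the branch in ops.py comes in.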
I will create a new issue to report this problem with detailed experimental settings.
Hi, I am trying to use fp16 with CenterPoint but end up with the same bug :( The environment is mmdet3d 0.16.0, CUDA 10.1, PyTorch 1.6, V100. I think the weights of the sparse conv are not converted to fp16, since the code runs into the first if clause at line 119 of mmdet3d/ops/spconv/ops.py.
P.S. I directly added `fp16 = dict(loss_scale=512.)` to the CenterPoint default config to enable float16 training.
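For reference, the change described above amounts to a single top-level line in the config file (mmdet3d 0.16.x config style); the `loss_scale` value simply follows what was quoted above:

```python
# Added at the top level of a CenterPoint config to request
# mixed-precision training via mmcv's Fp16OptimizerHook.
fp16 = dict(loss_scale=512.)
```

This is the same mechanism the existing SECOND fp16 config uses, so by itself it does not work around the unconverted sparse-conv weights discussed in this thread.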