Cc-Hy / CMKD

Cross-Modality Knowledge Distillation Network for Monocular 3D Object Detection (ECCV 2022 Oral)
Apache License 2.0

about training V2.yaml #15

Open ksh11023 opened 1 year ago

ksh11023 commented 1 year ago

Hello,

When training with the V2.yaml config, why does train_cmkd.py load pretrained_lidar_model into model.model_img?


Thank you.

Cc-Hy commented 1 year ago

Hi. The student model and the teacher model share several structures, such as the BEV backbone and the detection head. We load the pre-trained weights from the teacher model into these shared parts to accelerate convergence and potentially improve the performance of the student model.

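A minimal sketch of this kind of partial initialization, assuming a standard PyTorch checkpoint (the function name and the "model_state" checkpoint key are illustrative, not necessarily the repo's exact API): only the teacher tensors whose names and shapes match the student's shared submodules are copied over.

```python
import torch


def load_shared_weights(student, teacher_ckpt_path):
    """Copy teacher weights into the student wherever names/shapes match.

    Shared structures (e.g. BEV backbone, detection head) have identical
    parameter names in both models, so they get initialized from the
    teacher; image-branch parameters with no counterpart are left as-is.
    """
    teacher_state = torch.load(teacher_ckpt_path, map_location="cpu")
    # Checkpoints often wrap the weights under a key such as "model_state".
    if isinstance(teacher_state, dict) and "model_state" in teacher_state:
        teacher_state = teacher_state["model_state"]

    student_state = student.state_dict()
    matched = {
        name: tensor
        for name, tensor in teacher_state.items()
        if name in student_state and tensor.shape == student_state[name].shape
    }
    student_state.update(matched)
    student.load_state_dict(student_state)
    return sorted(matched)  # parameter names initialized from the teacher
```

An equivalent shortcut in recent PyTorch is `student.load_state_dict(teacher_state, strict=False)`, which likewise ignores missing keys, though it will still raise on shape mismatches, so the explicit filter above is safer when the two models differ.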

ksh11023 commented 1 year ago

Thank you for the swift reply!

I have one more question. When training for 60 epochs (BEV DA), is it better to train without the RPN loss? The code only trains with the depth loss and the BEV loss.

Thank you.

Cc-Hy commented 1 year ago

Hi. Yes, this can be considered a trick in the training process: while the BEV features are not yet correct, computing RPN losses on top of them is meaningless. Similar techniques have been used in other methods, for example training with only 2D detection losses first and adding the 3D detection losses later.
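The staged objective described above can be sketched as follows (a hedged illustration with made-up names, not the repo's actual loss code): during the first stage only the depth and BEV distillation terms are optimized, and the RPN term joins once the BEV features are expected to be reliable.

```python
def total_loss(losses, epoch, rpn_start_epoch=60):
    """Combine training losses with staged RPN supervision.

    losses: dict with scalar terms 'depth', 'bev', and 'rpn'.
    During the first `rpn_start_epoch` epochs, only the depth and BEV
    (distillation) losses are used; afterwards the RPN loss is added,
    since by then the student's BEV features should be meaningful.
    """
    loss = losses["depth"] + losses["bev"]
    if epoch >= rpn_start_epoch:
        loss = loss + losses["rpn"]
    return loss
```

The same idea generalizes to any curriculum where a downstream loss is gated until its inputs (here, the BEV features) have converged enough to make the loss informative.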