Manuel-Z opened this issue 3 weeks ago
Could you provide more training details, such as which dataset was used?
Thanks for your reply. We just followed the default config monocd.yaml with the KITTI dataset, and the training runs normally.
Attached log: log_09-25 11:55:03.txt
[2024-09-25 11:55:07,050] monocd.trainer INFO: Start training
[2024-09-25 11:55:19,212] monocd.trainer INFO: eta: 15:38:43 iter: 10 loss: 73.0622 2D_IoU: 0.0754 3D_IoU: 0.0000 depth_loss: 20.5264 compensated_depth_loss: 1.5848 keypoint_depth_loss: 12.6731 hm_loss: 3.5794 bbox_loss: 0.9254 dims_loss: 3.2605 orien_loss: 1.8150 horizon_hm_loss: 3.7656 offset_loss: 0.4400 trunc_offset_loss: 0.0000 corner_loss: 6.0932 keypoint_loss: 13.9379 weighted_avg_depth_loss: 4.4608 depth_MAE: 0.8294 comp_cen_MAE: 0.5753 comp_02_MAE: 0.6096 comp_13_MAE: 0.6100 center_MAE: 3.1103 02_MAE: 3.4582 13_MAE: 3.3069 lower_MAE: 0.3881 hard_MAE: 0.7067 soft_MAE: 1.1548 time: 1.2141 data: 0.1173 lr: 0.00030000
and finally:
[2024-09-25 23:07:45,613] monocd.inference INFO:
Car AP@0.70, 0.70, 0.70:
bbox AP:96.2938, 87.5118, 80.2193
bev AP:26.4258, 20.3711, 18.2670
3d AP:17.2279, 13.3145, 11.0903
aos AP:96.09, 86.88, 79.13
Car AP@0.70, 0.50, 0.50:
bbox AP:96.2938, 87.5118, 80.2193
bev AP:63.8181, 48.7316, 43.0057
3d AP:57.2157, 43.2018, 37.6626
aos AP:96.09, 86.88, 79.13
Thanks for the great work! It's useful for our work, but we ran into a problem during training: the 3D_IoU is always 0.0000, which leads to lower performance than reported in the paper.
The training process just follows the code here, with CUDA 11.3 and torch==1.12.0, and we suspect this may be caused by the Polygon function in iou_loss.py.
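For reference, here is a minimal, hypothetical sketch of how a shapely-based BEV IoU can silently collapse to zero; the function name bev_iou and the error handling are assumptions for illustration, not the repository's actual iou_loss.py:

```python
# Hypothetical sketch (not the repo's exact code): a common pattern for rotated
# BEV IoU that can silently return 0 when shapely raises an exception, e.g. due
# to corners arriving in a self-intersecting order or a shapely 2.x behaviour change.
from shapely.geometry import Polygon

def bev_iou(corners_a, corners_b):
    """corners_*: (4, 2) BEV box corners, expected in consecutive (clockwise) order."""
    try:
        poly_a = Polygon(corners_a)
        poly_b = Polygon(corners_b)
        inter = poly_a.intersection(poly_b).area
        union = poly_a.area + poly_b.area - inter
        return inter / union if union > 0 else 0.0
    except Exception:
        # If the polygons are invalid or shapely errors out, this branch makes the
        # IoU 0 for every sample, which would show up as 3D_IoU: 0.0000 in the log.
        return 0.0
```

If the loss follows a pattern like this, logging poly_a.is_valid (or checking the installed shapely version) during training would quickly show whether every sample is falling into the exception branch.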
Do you have any idea about this problem? We look forward to your reply, thank you!