[PyTorch] Official implementation of CVPR2022 paper "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers". https://arxiv.org/abs/2203.11496
Hi, I am doing similar research on monocular 3D object detection, and I wonder why you resize only the image and the 2D bboxes, but not gt_bboxes_3d? The regression procedure regresses directly to gt_bboxes_3d, not to gt_bboxes (the 2D box annotations).
I followed the official mmdetection "Resize" augmentation and got an erroneous result, as shown in the figure below.
https://github.com/XuyangBai/TransFusion/blob/73c596f7bd3460c17cbcc58dd9bcc5a0896774a8/mmdet3d/datasets/pipelines/loading.py#L223
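For context, here is a minimal sketch of what I expected to happen under a pinhole camera model: when the image is resized, the 3D boxes stay in camera/LiDAR coordinates and only the camera intrinsics are scaled, so the projection of the (unchanged) 3D boxes still lands on the right pixels. The helper names below are my own, not from this repo:

```python
import numpy as np

def resize_image_and_intrinsics(img, cam_intrinsic, scale):
    """Resize the image and scale the pinhole intrinsics to match.

    3D boxes (gt_bboxes_3d) live in camera/LiDAR coordinates, so they
    are left untouched; only the projection matrix changes.
    """
    h, w = img.shape[:2]
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index mapping (stand-in for cv2.resize).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows[:, None], cols]

    K = cam_intrinsic.astype(float).copy()
    K[0, 0] *= scale  # fx
    K[1, 1] *= scale  # fy
    K[0, 2] *= scale  # cx
    K[1, 2] *= scale  # cy
    return resized, K

def project_to_image(points_cam, K):
    """Project Nx3 camera-frame points (e.g. 3D box corners) to pixels."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```

With this, a 3D box corner projected through the scaled intrinsics moves by exactly the image scale factor, which is why gt_bboxes_3d itself would not need resizing.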
I am looking forward to your reply.