Official Repo for Ground-aware Monocular 3D Object Detection for Autonomous Driving / YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection
I don't understand why the disparity has to be divided by 16. After the division, does the disparity map correspond to the original input size or to 1/16 of the input size?
https://github.com/Owen-Liuyuxuan/visualDet3D/blob/master/visualDet3D/data/kitti/dataset/stereo_dataset.py#L121
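For context, one common source of a divide-by-16 factor (this is an assumption about the linked line, not something confirmed from the repo): OpenCV's stereo matchers (`StereoBM`/`StereoSGBM`) output disparity as 16-bit fixed point with 4 fractional bits, i.e. the stored value is `16 * true_disparity`. Under that convention, dividing by 16 only rescales the values back to sub-pixel disparities in pixel units; the map stays at the full input resolution, not 1/16 of it. A minimal sketch:

```python
import numpy as np

# OpenCV-style stereo matchers store disparity as 16-bit fixed point
# with 4 fractional bits: stored = round(16 * true_disparity).
# (Assumption: the precomputed disparity here follows that convention.)
stored = np.array([[16, 24, 160],
                   [0,   8,  48]], dtype=np.int16)

# Dividing by 16 converts back to sub-pixel disparity in pixel units.
disp = stored.astype(np.float32) / 16.0
print(disp)
# [[ 1.   1.5 10. ]
#  [ 0.   0.5  3. ]]

# The division changes values, not resolution: shape is unchanged.
print(disp.shape == stored.shape)  # True
```

If the disparity files were instead generated some other way (e.g. projected from LiDAR depth), the factor could have a different origin, so it is worth checking how the disparity ground truth was produced.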