abhi1kumar / DEVIANT

[ECCV 2022] Official PyTorch Code of DEVIANT: Depth Equivariant Network for Monocular 3D Object Detection
https://arxiv.org/abs/2207.10758
MIT License

There may be some errors in class #23

Closed by xudh1991 10 months ago

xudh1991 commented 10 months ago

In `rect_to_img` in your project, I trained the model on my own data and found that some targets could never be used in training. I traced the problem to the projection from rectified camera coordinates to the image, using the `pts_rect` points and the `P2` calibration matrix. My image size is 2560×1150, and the projected 3D coordinates exceed the image boundary, so those targets never enter training. According to the coordinate transformation rules, I think this line should be

pts_img = (pts_2d_hom[:, 0:2].T / pts_2d_hom[:, 2]).T

rather than

pts_img = (pts_2d_hom[:, 0:2].T / pts_rect_hom[:, 2]).T

However, after changing this line, I still have not achieved ideal results. Is my thinking correct? If there is an error, please point it out. If it is correct, are there any other related places that need to be changed? Why did I not achieve the desired result? Sincerely in need of help; thank you very much.
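For context, a minimal sketch of the corrected projection is below. Variable names follow the snippet quoted above, not necessarily the exact DEVIANT/GUPNet code; the fix is to divide by the z-component of the *projected* homogeneous point rather than the input depth.

```python
import numpy as np

def rect_to_img(pts_rect, P2):
    """Project rectified-camera 3D points to pixel coordinates.

    pts_rect: (N, 3) points in the rectified camera frame.
    P2:       (3, 4) camera projection matrix.
    """
    # Homogenize: (N, 3) -> (N, 4)
    pts_rect_hom = np.hstack([pts_rect, np.ones((pts_rect.shape[0], 1))])
    # Project: (N, 4) @ (4, 3) -> (N, 3) homogeneous image points
    pts_2d_hom = pts_rect_hom @ P2.T
    # Corrected line: divide by the projected z, not pts_rect_hom[:, 2]
    pts_img = (pts_2d_hom[:, 0:2].T / pts_2d_hom[:, 2]).T
    return pts_img
```

For a simple pinhole-style `P2` with last row `[0, 0, 1, 0]`, a point on the optical axis projects to the principal point, which is an easy sanity check.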

abhi1kumar commented 10 months ago

Hi @xudh1991 Thank you for your interest in DEVIANT.

I used my own training data to train the model and found that some targets could never be trained. The projected 3D coordinates exceeded the boundary, and so the target is not used in training.

You are correct. The projected 3D coordinates outside the image mean the camera does not see that particular 3D box. Therefore, detecting such 3D boxes is impossible with any image-based detector. Hence, those targets are not used in training.

Note that the datasets obtain and annotate 3D boxes using LiDAR or stereo images, which have a wider field of view (FoV) than a monocular camera. As such, some 3D boxes are usually outside the camera's FoV. The following figure (Courtesy: webgl) illustrates this point in the Bird's Eye View. The LiDAR sees the top 3D box (rectangle), and therefore that 3D box appears in the annotated labels. The camera cannot see this 3D box, and the DEVIANT codebase excludes such 3D boxes in training.

[Figure: side_view_frustum — Bird's Eye View of the LiDAR and camera frustums]
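The filtering described above can be sketched as a simple boundary check on the projected box centers. This is a hypothetical helper for illustration, not the exact DEVIANT filtering code; the 2560×1150 size from the question is used in the usage example.

```python
import numpy as np

def inside_image(pts_img, img_w, img_h):
    """Boolean mask of projected points that land inside the image.

    pts_img: (N, 2) pixel coordinates, e.g. from rect_to_img().
    Targets whose projected centers fall outside [0, img_w) x [0, img_h)
    are invisible to the camera and are excluded from training.
    """
    x, y = pts_img[:, 0], pts_img[:, 1]
    return (x >= 0) & (x < img_w) & (y >= 0) & (y < img_h)

# Usage with the image size from the question:
centers = np.array([[100.0, 100.0], [3000.0, 500.0], [-5.0, 50.0]])
keep = inside_image(centers, img_w=2560, img_h=1150)
# keep -> [True, False, False]: only the first target enters training
```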

I think this position should be

pts_img = (pts_2d_hom[:, 0:2].T / pts_2d_hom[:, 2]).T

Thank you for noticing this bug. This code comes from the GUPNet codebase, on which the DEVIANT codebase is based.

After changing this line, I still have not achieved ideal results. Is my thinking correct? If there is an error, please point it out. If it is correct, are there any other related places that need to be changed? Why did I not achieve the desired result?

Your thinking is absolutely spot on. The reason the bug does NOT impact the KITTI, Waymo, and nuScenes datasets is that the P2 calibration matrices of all these datasets have [0, 0, 1, 0] as the last row (row index 2, with indexing starting from zero). This means the pts_rect_hom z-coordinate equals the pts_2d_hom z-coordinate, and therefore dividing by either of them leads to the exact same result. As an example, consider sample calibration matrices from the validation sets of these three datasets.
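This equivalence can be demonstrated numerically. The P2 values below are illustrative KITTI-style numbers, not from an actual calibration file; the only property that matters is the last row being [0, 0, 1, 0].

```python
import numpy as np

# Hypothetical KITTI-style P2; the key property is the last row [0, 0, 1, 0].
P2 = np.array([[721.5,   0.0, 609.6, 44.9],
               [  0.0, 721.5, 172.9,  0.2],
               [  0.0,   0.0,   1.0,  0.0]])

pts_rect = np.array([[ 1.0, 2.0, 10.0],
                     [-3.0, 0.5, 25.0]])
pts_rect_hom = np.hstack([pts_rect, np.ones((pts_rect.shape[0], 1))])
pts_2d_hom = pts_rect_hom @ P2.T

# Because the last row of P2 is [0, 0, 1, 0], the projected homogeneous z
# equals the input depth, so the buggy and fixed divisions agree exactly.
buggy = (pts_2d_hom[:, 0:2].T / pts_rect_hom[:, 2]).T
fixed = (pts_2d_hom[:, 0:2].T / pts_2d_hom[:, 2]).T
print(np.allclose(buggy, fixed))  # True
```

With a custom camera whose P2 last row is not [0, 0, 1, 0], the two divisions diverge, which is exactly the situation reported in this issue.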

Feel free to raise a PR for this issue. Also, feel free to post more questions and we will be happy to clarify further.

xudh1991 commented 10 months ago

Thank you very much for your detailed reply; I think I now know how to solve this problem. I would like to ask another question, which appears in many monocular 3D object detection methods but which I have not quite understood. If you feel this question is too basic, you may decline to answer, and I will close this issue. The total loss in the algorithm is obtained by summing multiple loss terms. However, one of the loss terms may take negative values during training, as shown in the figure above. Will this simple summation have a negative impact on the total loss?

abhi1kumar commented 10 months ago

Your new question is unrelated to the current issue. Would you mind opening a new issue for it? I will answer the question there.

PS: No question is too basic.