Closed paathelb closed 2 years ago
Hi, for this part we rely heavily on the OpenPCDet repo for calculating 3D APs. For that reason, the code regarding 3D AP is a bit messy; we will organize and release the code with documentation in the future.
However, if you are in a hurry, it won't be too difficult to reimplement yourself. The idea is simple: translate the MTrans-predicted 3D boxes into the OpenPCDet detection format, then run their scripts for evaluation.
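The conversion described above could be sketched roughly as follows. This is a minimal, hypothetical example of writing one detection as a KITTI-format label line (which OpenPCDet's KITTI evaluation consumes); the function name and the shape of the inputs are assumptions, not part of the MTrans codebase.

```python
# Hypothetical sketch: format one predicted 3D box as a KITTI label line.
# KITTI line layout: type trunc occl alpha bbox(4) h w l x y z ry score.
# All input conventions here (camera-frame location, hwl dims) are assumptions.

def box_to_kitti_line(cls_name, box2d, dims, loc, rot_y, score, alpha=-10.0):
    x1, y1, x2, y2 = box2d   # 2D box in image pixels
    h, w, l = dims           # KITTI order: height, width, length
    x, y, z = loc            # object center in camera coordinates
    return (f"{cls_name} 0.0 0 {alpha:.2f} "
            f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
            f"{h:.2f} {w:.2f} {l:.2f} {x:.2f} {y:.2f} {z:.2f} "
            f"{rot_y:.2f} {score:.4f}")

# Example: one Car detection rendered as a single label-file line.
line = box_to_kitti_line("Car", (10, 20, 110, 120), (1.5, 1.6, 3.9),
                         (2.0, 1.5, 20.0), -1.57, 0.95)
```

Writing one such file per frame ID into a results directory is typically enough for the OpenPCDet/KITTI evaluation scripts to pick them up.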
Thank you!
Dear author, has the code been updated? By the way, is it possible to conduct distributed training?
I tried to implement distributed training before, but I failed: there seems to be a problem with the Rotated_IoU module under distributed training.
Thank you for your reply! I recently read your article MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning and benefited a lot. May I ask how you train PointPillars and PointRCNN with 500/125 frames? How should the OpenPCDet code be modified?
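The thread does not answer this question, but one common way to train OpenPCDet detectors on a small subset (e.g. 500 or 125 frames) is to shrink the KITTI `ImageSets/train.txt` split file before regenerating the dataset infos. The sketch below is an assumption about workflow, not the authors' actual procedure; the paths and subset size are illustrative.

```python
# Hypothetical sketch: write a reduced KITTI train split for OpenPCDet.
# After writing the new split, the KITTI infos would need regenerating
# (e.g. via OpenPCDet's create_kitti_infos pipeline) -- not shown here.
import random

def write_subset_split(src_split, dst_split, n_frames, seed=0):
    """Sample n_frames frame IDs from src_split and write them to dst_split."""
    with open(src_split) as f:
        ids = [line.strip() for line in f if line.strip()]
    random.Random(seed).shuffle(ids)      # deterministic subsample
    subset = sorted(ids[:n_frames])       # KITTI splits are kept sorted
    with open(dst_split, "w") as f:
        f.write("\n".join(subset) + "\n")
    return subset
```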
Sorry, we haven't tried distributed training for this project yet, so this repo does not support DDP. As for calculating AP scores, we hardcode passing the .txt output files of MTrans into the OpenPCDet code (link), in the function eval_one_epoch(). The corresponding code is still a mess, and sorry, it might take us more time to add this functionality to this repo.
Dear author, I found a GitHub repository created by @paathelb, https://github.com/paathelb/kitti-object-eval-python; the code inside seems to implement this.
Thanks @paathelb for your nice work!
I'm terribly sorry, but I have another question. Why are there only 3387 pseudo-labels instead of 3769? When you use OpenPCDet's code to compute AP3D, do you only compute these 3387?
There may be frames with no detections, and hence no pseudo-labels for them.
You can follow the section Use pseudo labels to train 3D detectors in the FGR work here: https://github.com/weiyithu/FGR to understand how to use the pseudo-labels (some empty) for downstream 3D detection training.
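One practical way to handle the gap between the 3769 training frames and the 3387 pseudo-label files is to create an empty label file for every frame with no detections, so downstream KITTI tooling sees a complete label directory. This sketch is an assumption about workflow (paths and naming are illustrative), not code from MTrans or FGR.

```python
# Hypothetical sketch: write an empty .txt pseudo-label for every frame ID
# that has no detections, so the label directory covers all training frames.
import os

def fill_missing_labels(frame_ids, label_dir):
    """Return the frame IDs for which an empty label file was created."""
    created = []
    os.makedirs(label_dir, exist_ok=True)
    for fid in frame_ids:
        path = os.path.join(label_dir, f"{fid}.txt")
        if not os.path.exists(path):
            open(path, "w").close()  # empty file = no objects in this frame
            created.append(fid)
    return created
```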
Hello! May I ask where the code is for evaluating the results (getting the 3D AP) for MTrans as a 3D object detection model using the full data?