DerrickXuNu / OpenCOOD

[ICRA 2022] An open-source framework for cooperative detection. Official implementation of OPV2V.
https://mobility-lab.seas.ucla.edu/opv2v/

Why is the eval mAP I get always about 0.005? #69

Closed IndigoChildren closed 1 year ago

IndigoChildren commented 1 year ago

[screenshot]

DerrickXuNu commented 1 year ago

Does this just happen to Where2comm or any other model?

IndigoChildren commented 1 year ago

Any other model as well. Actually, I think there may be something wrong in eval_utils.py: it pools all the prediction results together, but the descending confidence-score sort is only applied within each single batch.
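For readers following this hypothesis, here is a minimal, generic sketch of a VOC-style AP computation in which the descending confidence sort happens once over the predictions pooled from the whole test set. It is an illustration only; the function and its inputs are hypothetical and this is not OpenCOOD's actual eval_utils.py (which, per the reply below, is believed to be correct).

import numpy as np

def average_precision(scores, tp_flags, num_gt):
    # scores:   confidence of every prediction across ALL frames (hypothetical input)
    # tp_flags: 1 if that prediction matched a ground-truth box, else 0
    # num_gt:   total number of ground-truth boxes across all frames
    order = np.argsort(-np.asarray(scores, dtype=float))   # global descending sort
    tp = np.asarray(tp_flags)[order]
    fp = 1 - tp
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(fp)
    recall = cum_tp / max(num_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1)
    # area under the precision-recall curve, accumulated at each recall increment
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# toy usage: predictions from two frames pooled together before the global sort
print(average_precision(scores=[0.9, 0.2, 0.8, 0.6],
                        tp_flags=[1, 0, 1, 1],
                        num_gt=4))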

DerrickXuNu commented 1 year ago

No, I don't think eval_utils has an issue; it runs well on several of my machines, and many research groups are already using it without any problem. What is your spconv version?

IndigoChildren commented 1 year ago

spconv-cu113, 2.3.3

DerrickXuNu commented 1 year ago

Try PyTorch 1.12.0 + spconv 2.2.6.
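A possible way to switch to that combination (a sketch only; the exact wheel names and index URL depend on your CUDA toolkit, and the versions below assume CUDA 11.3 wheels were published for them):

pip install torch==1.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install spconv-cu113==2.2.6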

DerrickXuNu commented 1 year ago

Also, can you show your inference command?

socialism-redstar commented 1 year ago

python opencood/tools/inference.py --model_dir opencood/opencood/checkpoints/point_pillar_where2comm_v2xset --fusion_method 'intermediate'

IndigoChildren commented 1 year ago

I tried PyTorch 1.12.0 + spconv 2.2.6 again but got the same result. I also tried running inference on a single frame and, surprisingly, got a correct result of about 0.88.

IndigoChildren commented 1 year ago

[screenshot]

DerrickXuNu commented 1 year ago

@Little-Podi, have you run into this before? Could you try to see whether you get such a result with the current OpenCOOD version? Many thanks!

Little-Podi commented 1 year ago

I have never met this issue before. I can test the up-to-date version within this week; I'm taking a vacation on the mainland right now ^_^. @IndigoChildren Have you tried a previous version of OpenCOOD? I verified the codebase just 3 or 4 commits back. I'm using PyTorch 1.13 + CUDA 11.6 + spconv 2.2.6.
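For anyone comparing environments against the combination above, a quick sanity check of the installed versions (a sketch; it assumes the spconv 2.x wheel exposes __version__, which is not guaranteed for every build):

import torch
import spconv

print("torch :", torch.__version__)        # e.g. 1.13.x
print("cuda  :", torch.version.cuda)       # e.g. 11.6
print("spconv:", getattr(spconv, "__version__", "unknown"))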

IndigoChildren commented 1 year ago

Actually, I have only used this one version. Thanks a lot for spending time on my problem; have a nice trip on the mainland. By the way, welcome to Shenzhen! It's really hard to get a ticket to a Jay Chou concert.

Little-Podi commented 1 year ago

Haha, I know! I have never managed to get one, even though so many concerts are held in Hong Kong. I just tested the evaluation code today and found no problem in the current version. Did you customize anything?

IndigoChildren commented 1 year ago

I tried to train an F-Cooper model myself and got the same bad validation result; I don't know what happened to my code.

IndigoChildren commented 1 year ago

I will try the current version. Thanks for your help.

IndigoChildren commented 1 year ago

@Little-Podi I tried the latest version of the code yesterday and trained a model with point_pillar_fcooper.yaml, without changing any of the configuration. After training, I used the files in logs to test the accuracy on the test set and found it was almost 0. Could I trouble you to try doing the same? Why do I keep hitting bugs? I really can't take it anymore.
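For reference, the reproduction described above corresponds to the usual OpenCOOD command pattern below; the yaml path assumes the stock repo layout, and the checkpoint folder is a placeholder for whatever directory training writes under logs:

# train F-Cooper with the unmodified config
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/point_pillar_fcooper.yaml
# evaluate the checkpoint folder produced by training on the test split
python opencood/tools/inference.py --model_dir <checkpoint folder under logs> --fusion_method 'intermediate'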

IndigoChildren commented 1 year ago

Also, while training this F-Cooper model, the validation_loss diverged several times.

IndigoChildren commented 1 year ago

[screenshot]

sidiangongyuan commented 1 year ago

Also, while training this F-Cooper model, the validation_loss diverged several times.

Has this problem been solved? I have the same problem now; the results on CoCa3D are very poor.