longzw1997 / Open-GroundingDino

This is a third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection.

Is there a problem with the evaluation code? All mAP values on my validation set show -1 #73

Open Alan7ai opened 6 months ago

Alan7ai commented 6 months ago

IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Training runs normally and the model itself is fine; inference with my own code gives correct results, but with your evaluation code every mAP shows -1. What could the problem be?
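For what it's worth, pycocotools prints -1.000 when there is nothing to evaluate for that setting (no matching ground truth or detections), not when the model scores zero. A quick sanity check, assuming a standard COCO-format validation JSON and a training label-map file (both file paths below are placeholders), is to confirm that the annotation file actually contains annotations and that its category IDs overlap with the label map:

```python
# Minimal sanity check for the "all -1.000" symptom: confirm the COCO-format
# val annotation file has annotations, and that its category ids overlap with
# the label map used for training. Paths are placeholders.
import json
from pycocotools.coco import COCO

ann_file = "path/to/val_annotations.json"   # placeholder
label_map_file = "path/to/label_map.json"   # placeholder (id -> class name used in training)

coco = COCO(ann_file)
print("images     :", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
print("category ids in val json :", sorted(coco.getCatIds()))

with open(label_map_file) as f:
    label_map = json.load(f)
print("category ids in label map:", sorted(int(k) for k in label_map))

# If there are zero annotations, or the two id sets do not overlap, COCOeval
# has nothing to match and reports -1.000 for every AP/AR entry.
```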

Alan7ai commented 6 months ago

Second question: when fine-tuning on my own dataset, which modules should be frozen to get a good balance of performance and training speed? Could you give some guidance? Thank you.
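Not an answer from the authors, but for illustration: a common fine-tuning setup for GroundingDINO-style models is to freeze the image backbone and the BERT text encoder and train only the fusion/decoder layers. A minimal PyTorch sketch, assuming the model exposes `backbone` and `bert` submodules (these attribute names are assumptions and may differ in this repo):

```python
# Hypothetical sketch of freezing the image backbone and text encoder before
# fine-tuning; `backbone` / `bert` are assumed attribute names and may not
# match the actual Open-GroundingDino module layout.
import torch

def freeze_for_finetune(model: torch.nn.Module) -> None:
    for name in ("backbone", "bert"):
        module = getattr(model, name, None)
        if module is None:
            continue  # attribute name differs in this codebase; adjust as needed
        for p in module.parameters():
            p.requires_grad = False

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable / 1e6:.1f}M / {total / 1e6:.1f}M")

# Build the optimizer only over trainable parameters afterwards, e.g.
# torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
```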

HZWHH commented 6 months ago

> Second question: when fine-tuning on my own dataset, which modules should be frozen to get a good balance of performance and training speed? Could you give some guidance? Thank you.

How much GPU memory does fine-tuning need, and how long does it take?

TaoTXiXi commented 5 months ago

Is there something wrong with this evaluation metric? I trained two models with different methods; the first has a higher AP than the second, but on actual test images the second one performs better.

andynnnnn commented 3 months ago

> (same COCO eval output as above, with every AP/AR value at -1.000)
> Training runs normally and the model itself is fine; inference with my own code gives correct results, but with your evaluation code every mAP shows -1. What could the problem be?

Hey, did you ever solve this? Could you share how?

nuanxinqing commented 2 months ago

I'm hitting the same problem. On a custom single-class mini dataset, training and evaluation both work fine, but after expanding to a large multi-class dataset, eval returns -1 for everything. @longzw1997 @BIGBALLON @SahilCarterr
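One thing worth checking when going from single-class to multi-class, assuming the -1 comes from category-ID mismatches rather than the eval code itself: count the annotations per category in the expanded COCO-format JSON and compare the IDs against the label map used at training time. A small sketch with a placeholder path:

```python
# Count annotations per category in a COCO-format json to spot categories that
# end up with no ground truth (or ids that were never declared) after
# expanding to a multi-class dataset. The annotation path is a placeholder.
import json
from collections import Counter

with open("path/to/val_annotations.json") as f:   # placeholder
    data = json.load(f)

counts = Counter(ann["category_id"] for ann in data["annotations"])
for cat in data["categories"]:
    print(f'{cat["id"]:>4} {cat["name"]:<24} {counts.get(cat["id"], 0)} annotations')

undeclared = set(counts) - {cat["id"] for cat in data["categories"]}
if undeclared:
    print("annotations reference ids missing from 'categories':", sorted(undeclared))

# If these ids disagree with the label map used during training, or every count
# is zero, the COCO eval summary comes out as -1.000 across the board.
```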