DAMO-YOLO: a fast and accurate object detection method incorporating several new techniques, including NAS-searched backbones, an efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
[X] I have read the README carefully.
[X] I want to train on my custom dataset; I have read the tutorial on finetuning on custom data carefully and organized my dataset with the correct directory structure.
[X] I have pulled the latest code from the main branch and run it again, and the problem persists.
Search before asking
[X] I have searched the DAMO-YOLO issues and found no similar questions.
Question
Hello, I am currently trying to finetune DAMO-YOLO tiny on my own data. Since I had previously finetuned YOLOv5s on the same data, I matched the DAMO-YOLO finetuning setup to it for comparison: batch size 80, data augmentation enabled for the entire training run, and 30 epochs; all other training hyperparameters were left unchanged. At test time I also set conf_thre and nms_iou_thre to the same values used for YOLOv5. However, on the test set DAMO-YOLO tiny predicts far more boxes than YOLOv5s: their TP counts are similar, but DAMO-YOLO tiny's FP count is about 7x that of YOLOv5s, which lowers mAP by roughly 7 points. I am confused about what is causing the excessive FPs in DAMO-YOLO tiny. Does the model simply need more training epochs? Any advice or suggestions would be greatly appreciated; I look forward to your reply.
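To make the TP/FP comparison above concrete, here is a minimal sketch (not DAMO-YOLO's or YOLOv5's own evaluator; the function names and the (x1, y1, x2, y2, score) format are assumptions for illustration) of how detections are typically filtered by a confidence threshold and then greedily matched to ground truth by IoU. It shows why extra low-confidence boxes that survive conf_thre turn directly into FPs:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_tp_fp(preds, gts, conf_thre=0.25, iou_thre=0.5):
    """preds: list of (box, score); gts: list of boxes. Returns (tp, fp).

    Predictions below conf_thre are dropped; the rest are matched
    greedily (highest score first) to the unmatched ground-truth box
    with the best IoU >= iou_thre. Unmatched predictions count as FP.
    """
    kept = sorted((p for p in preds if p[1] >= conf_thre),
                  key=lambda p: p[1], reverse=True)
    matched = set()
    tp = fp = 0
    for box, _score in kept:
        best, best_iou = None, iou_thre
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is None:
            fp += 1  # no ground-truth match: a false positive
        else:
            matched.add(best)
            tp += 1
    return tp, fp
```

With this matching scheme, a spurious box like `((20, 20, 30, 30), 0.3)` counts as an FP at `conf_thre=0.25` but disappears at `conf_thre=0.5`, so it can be worth checking whether the two models' score distributions are comparable before reusing YOLOv5's thresholds verbatim.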
Additional
No response