luckyluckyjl opened this issue 9 months ago
@luckyluckyjl Of course, because you have a specific fine-tuning order.
But even when I run inference with the categories in the same order used during fine-tuning, the result for the last category is still wrong. For example, with the prompt: sedan . bus . bicycle . truck . excavator . Concrete truck . hazardous chemical truck. The detection result for the last category is always wrong.
@hhaAndroid Hello, I ran into a similar problem, except that I did not fine-tune the model — I directly used your pretrained model (MM-GDINO-L*/Swin-L/-/60.3/O365V2,OpenImageV6,ALL). I found that with the same prompt words, merely changing the order of the categories each time produces different results, and after swapping the order some targets are not detected at all. Is there something specific about this? (For example: car . road . lake . truck . versus the swapped order: road . lake . car . truck .)
Thank you very much for reproducing the training code for Grounding DINO. When I was using it, I found that after fine-tuning, swapping the positions of the prompt words can cause errors in the detection results.
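For anyone debugging this, here is a minimal plain-Python sketch of what the reports above describe. The `" . "` join format and the position-based label mapping are assumptions inferred from the example prompts in this thread, not taken from the MM-Grounding-DINO source: the predicted label index corresponds to the category's position in the prompt, so any code that caches a class-index-to-name mapping must rebuild it whenever the category order changes.

```python
def build_prompt(categories):
    # Assumed Grounding-DINO-style prompt: category names joined
    # with " . " and terminated with a trailing " ." (hypothetical
    # helper for illustration, not an MMDetection API).
    return " . ".join(categories) + " ."

def label_to_name(label_index, categories):
    # The predicted label index is the category's position in the
    # prompt, so the index-to-name mapping depends on prompt order.
    return categories[label_index]

order_a = ["car", "road", "lake", "truck"]
order_b = ["road", "lake", "car", "truck"]

print(build_prompt(order_a))  # car . road . lake . truck .

# The same detection carries a different label index under each
# order; reusing order_a's mapping for order_b silently mislabels.
assert label_to_name(0, order_a) == "car"
assert label_to_name(0, order_b) == "road"
```

This only explains mislabeling from a stale mapping; it does not explain the score changes or missed detections reported above, which would come from the model's text encoding itself being sensitive to token positions.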