wjf5203 / VNext

Next-generation video instance recognition framework on top of Detectron2, which supports InstMove (CVPR 2023), SeqFormer (ECCV Oral), and IDOL (ECCV Oral)
Apache License 2.0

Cannot reproduce the same mAP_L result on the Youtube-VIS 2022 validation set #29

Open shulou1996 opened 2 years ago

shulou1996 commented 2 years ago

Hello! I trained IDOL using the default Swin-L config YAML file, changing only the dataset from YouTube-VIS 2019 to 2021, and evaluated on the YouTube-VIS 2022 validation set. I got nearly the same mAP_S, but an mAP_L of around 44, which is much lower than the 48.4 reported in your 1st-place solution paper. Is there any problem? Thank you very much.

lirui-9527 commented 2 years ago

Hello! When I run inference with IDOL, I don't get any evaluation results. How did you obtain evaluation metrics such as AP? Can you share how? Thank you very much!
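For context: the ground-truth annotations for the YouTube-VIS validation split are not released publicly, so AP on the validation set is normally obtained by uploading the predicted `results.json` to the official online evaluation server rather than computing it locally. A minimal sketch of packaging predictions for upload — `package_results` is a hypothetical helper name, and the path to `results.json` depends on where your inference script writes it:

```python
import json
import zipfile

def package_results(results_path: str, out_zip: str = "submission.zip") -> str:
    """Zip a results.json for upload to the YouTube-VIS evaluation server.

    The server expects a zip archive with results.json at the archive root.
    """
    # Sanity-check: the file should be valid JSON holding a list of
    # per-instance predictions (video_id, category_id, score, segmentations).
    with open(results_path) as f:
        preds = json.load(f)
    assert isinstance(preds, list), "results.json should be a list of predictions"

    # Write the archive; arcname keeps the file at the zip root, as required.
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(results_path, arcname="results.json")
    return out_zip
```

Usage would be `package_results("output/inference/results.json")`, then uploading the resulting `submission.zip` to the challenge's evaluation server to receive the AP numbers.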

jaideep19 commented 2 years ago

Hey, how did you train on the 2021 dataset? We cannot find the annotation file for the 2021 dataset at the official link. Can you provide those annotation JSON files?

15733171319 commented 1 year ago

> Hello! When I run inference with IDOL, I don't get any evaluation results. How did you obtain evaluation metrics such as AP? Can you share how?

Hello, has your problem been solved? I ran into the same issue and would like to ask how you resolved it.

DUT-CSJ commented 1 year ago

> Hello! I trained IDOL using the default Swin-L config YAML file, changing only the dataset from YouTube-VIS 2019 to 2021, and evaluated on the YouTube-VIS 2022 validation set. I got nearly the same mAP_S, but an mAP_L of around 44, which is much lower than the 48.4 reported in your 1st-place solution paper. Is there any problem?

Hello, I have almost reproduced the result, but I have some questions about multi-scale testing. Have you tried retraining since then?