Open 1170300714 opened 3 months ago
You need to compare the text JSON and the validation annotation file.
Thanks for your reply.
In fact, I use the obj365v1_class_texts.json that you released as the text JSON, and the objects365_val.json released on the official Objects365 website as the validation annotation file.
Furthermore, I have checked the class names, indices, and order in these two files, and they match exactly:
the text json file:
the anno file:
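For anyone who wants to script this check rather than compare screenshots by eye, here is a minimal sketch. It assumes the text JSON is a list of synonym lists (the format of YOLO-World's `data/texts/*.json` files) and the annotation file is COCO-style; the demo data at the bottom is made up.

```python
import json

def category_names_match(text_classes, ann_categories):
    """Compare class names from a text-JSON list against COCO-style
    annotation categories, index by index."""
    # text_classes: one list of synonyms per class, e.g. [["person"], ["sneakers"], ...]
    # ann_categories: e.g. [{"id": 1, "name": "person"}, ...]
    ann_sorted = sorted(ann_categories, key=lambda c: c["id"])
    ann_names = [c["name"] for c in ann_sorted]
    text_names = [t[0] for t in text_classes]
    return text_names == ann_names

# In practice, load the real files, e.g.:
# text_classes = json.load(open("data/texts/obj365v1_class_texts.json"))
# ann = json.load(open("data/objects365v1/annotations/objects365_val.json"))
# print(category_names_match(text_classes, ann["categories"]))

# Tiny illustration with made-up data:
demo_texts = [["person"], ["car"]]
demo_cats = [{"id": 2, "name": "car"}, {"id": 1, "name": "person"}]
print(category_names_match(demo_texts, demo_cats))  # True
```

Note that matching order here is necessary but not sufficient: as the reply below this comment points out, the *evaluator* must also index categories consistently.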
Hi @1170300714, I've checked this bug. You need to sort the categories of Objects365 first, since the categories are not consistent between train and val.
Please modify the evaluation metric as follows:
```python
val_evaluator = dict(
    type='mmdet.CocoMetric',
    ann_file='data/objects365v1/annotations/objects365_val.json',
    metric='bbox',
    sort_categories=True,
    format_only=False)
test_evaluator = val_evaluator
```
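For context, sorting matters because the label index a prediction carries must line up with the ground-truth category at the same index. A rough, hypothetical sketch of the kind of remapping such an option has to perform (not mmdet's actual implementation):

```python
def sort_and_remap(categories):
    """Sort COCO-style categories by name and build an old-id -> new-index map.
    If train and val annotations list the same classes in different orders,
    this is roughly the normalization needed before comparing labels."""
    sorted_cats = sorted(categories, key=lambda c: c["name"])
    id_map = {c["id"]: i for i, c in enumerate(sorted_cats)}
    return sorted_cats, id_map

# Made-up categories with arbitrary ids:
cats = [{"id": 10, "name": "car"}, {"id": 3, "name": "person"}]
sorted_cats, id_map = sort_and_remap(cats)
print([c["name"] for c in sorted_cats])  # ['car', 'person']
print(id_map)                            # {10: 0, 3: 1}
```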
You will obtain the right results, for example (YOLO-World-v2-L):
```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.266
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.354
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.290
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.132
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.298
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.415
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.292
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.507
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.538
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.348
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.598
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.726
```
Visualization examples:
Thanks for your help! It works~
`sort_categories=True`
Hello, thanks for your excellent work!! I ran into a similar problem when reproducing YOLO-World's zero-shot metrics on LVIS minival. Part of my config is shown below (the LVIS minival here uses the COCO val2017 images). Do I also need to set `sort_categories=True`, or is there some other configuration problem?

```python
coco_val_dataset = dict(
    _delete_=True,
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5LVISV1Dataset',
        data_root='/mnt/afs/huangtao3/wzz/YOLO-World/pretrain_data/LVIS',
        test_mode=True,
        ann_file='/mnt/afs/huangtao3/wzz/YOLO-World/pretrain_data/LVIS/lvis_v1_minival_inserted_image_name.json',
        data_prefix=dict(img=''),
        batch_shapes_cfg=None),
    class_text_path='data/texts/lvis_v1_class_texts.json',
    pipeline=test_pipeline)
val_evaluator = dict(
    type='mmdet.LVISMetric',
    ann_file='/mnt/afs/huangtao3/wzz/YOLO-World/pretrain_data/LVIS/lvis_v1_minival_inserted_image_name.json',
    metric='bbox')
```

The results are:

```
2024/05/04 00:24:47 - mmengine - INFO - Evaluating bbox...
2024/05/04 00:26:33 - mmengine - INFO - Epoch(test) [4809/4809]  lvis/bbox_AP: 0.0230  lvis/bbox_AP50: 0.0320  lvis/bbox_AP75: 0.0250  lvis/bbox_APs: 0.0160  lvis/bbox_APm: 0.0380  lvis/bbox_APl: 0.0670  lvis/bbox_APr: 0.0000  lvis/bbox_APc: 0.0000  lvis/bbox_APf: 0.0480  data_time: 0.0006  time: 1.5346
```

The config file is yolo_world_v2_l_clip_large_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py.
Thanks for your great work!
I want to evaluate the performance of yolo_world_s_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_train_pretrained-18bea4d2.pth on the val set of obj365v1.
I modified the config configs/pretrain_v1/yolo_world_s_dual_vlpan_l2norm_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py as follows:
and tested with the command:
But I get very low performance:

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.002
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.003
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.002
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.007
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.014
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.015
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.019
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.032
04/03 20:50:33 - mmengine - INFO - bbox_mAP_copypaste: 0.002 0.003 0.002 0.000 0.002 0.004
04/03 20:51:50 - mmengine - INFO - Results has been saved to results.pkl.
```
I think there is some mistake in my rewritten config, so could you please help me check it? Thanks a lot!