zhenyuw16 / UniDetector

Code release for our CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection".

Can you release the performance on 13 OdinW datasets like GLIP? #13

Open Kegard opened 1 year ago

Kegard commented 1 year ago

I used the released "end-to-end-stage" checkpoint on another dataset, and this is my result:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.012
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.025
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.011
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.015
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.513
OrderedDict([('bbox_mAP', 0.012), ('bbox_mAP_50', 0.025), ('bbox_mAP_75', 0.011), ('bbox_mAP_s', -1.0), ('bbox_mAP_m', 0.0), ('bbox_mAP_l', 0.015), ('bbox_mAP_copypaste', '0.012 0.025 0.011 -1.000 0.000 0.015')])

I want to know whether there is an error in the code or the result is really this bad. Or could you release the inference code for the OdinW datasets?
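
For reference, the numbers in the log above are standard pycocotools output, so the evaluation can be reproduced outside the repo's scripts. A minimal sketch, assuming a COCO-format annotation file and a detection results JSON (both file paths below are placeholders, not files from this repo):

```python
# Minimal sketch of COCO-style bbox evaluation with pycocotools.
# The two file paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val.json')  # ground-truth annotations
coco_dt = coco_gt.loadRes('results.bbox.json')    # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints an AP/AR table like the one above
```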

suekarry commented 12 months ago

> I used the released "end-to-end-stage" checkpoint on another dataset, and this is my result: [...]

Excuse me, have you solved your problem?

Kegard commented 12 months ago

I changed my dataset and tested again, and got a normal result.

suekarry commented 12 months ago

> I changed my dataset and tested again, and got a normal result.

Thanks for your reply!
1. What do you mean by changing the dataset? Did you modify the validation set part?
2. When you get -1, your loss still decreases and converges normally, right?

Kegard commented 12 months ago
  1. I changed the whole dataset, including both the train set and the val set.
  2. I only ran zero-shot inference on the other dataset, so I haven't trained the model. If you can't solve the problem, try switching to another dataset and testing again. I remember the mmdet documentation explains why -1 happens; you can read it.
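
For context on the -1 values: pycocotools' summarize() reports -1 for any metric whose slice contains no ground truth at all, e.g. mAP_s is -1 when the validation set has no boxes in the "small" area range. A quick check, assuming a COCO-format annotation file (the path is a placeholder):

```python
# Count ground-truth boxes per COCO area range; a count of zero for a
# range explains a -1 in the corresponding AP/AR line above.
# The annotation path is a placeholder.
from pycocotools.coco import COCO

coco = COCO('annotations/instances_val.json')
counts = {'small': 0, 'medium': 0, 'large': 0}
for ann in coco.loadAnns(coco.getAnnIds()):
    if ann['area'] < 32 ** 2:        # COCO "small": area < 32^2
        counts['small'] += 1
    elif ann['area'] < 96 ** 2:      # COCO "medium": 32^2 <= area < 96^2
        counts['medium'] += 1
    else:                            # COCO "large": area >= 96^2
        counts['large'] += 1
print(counts)  # e.g. {'small': 0, ...} would explain mAP_s = -1
```
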
suekarry commented 12 months ago
> I remember the mmdet documentation explains why -1 happens; you can read it.

I see, thank you! But I don't know where mmdet explains this; I checked the issues section on mmdet's GitHub and searched the mmdet manual, but couldn't find it.
Kegard commented 12 months ago

I can't find the link, but I remember that if mAP = -1, there is some problem with your dataset.

suekarry commented 12 months ago

> I can't find the link, but I remember that if mAP = -1, there is some problem with your dataset.

Thanks for your reply! I solved this problem by setting iou_threshold=None in coco.py. (Thank you again~)
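
For anyone hitting the same thing: in mmdetection 2.x the corresponding argument of CocoDataset.evaluate in mmdet/datasets/coco.py is iou_thrs (which I take the comment above to mean), and None simply falls back to the ten standard COCO IoU thresholds 0.50:0.05:0.95. A paraphrased sketch of that fallback, not a verbatim copy of the mmdet source:

```python
# Sketch of the iou_thrs=None fallback in mmdet 2.x's CocoDataset.evaluate
# (paraphrased from mmdet/datasets/coco.py).
import numpy as np

iou_thrs = None  # what setting the argument to None amounts to
if iou_thrs is None:
    # the standard COCO thresholds 0.50, 0.55, ..., 0.95
    iou_thrs = np.linspace(0.5, 0.95,
                           int(np.round((0.95 - 0.5) / 0.05)) + 1,
                           endpoint=True)
print(iou_thrs)  # [0.5  0.55 0.6 ... 0.95]
```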