open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0
29.25k stars 9.41k forks

I think the matching of "analysis result" is not correct #7373

Open CheungBH opened 2 years ago

CheungBH commented 2 years ago

I am using your code https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/analyze_results.py to evaluate the performance of single samples. However, the results look wrong:

[image]

These are the images in the "good" folder, whose mAP values are all 1.0.

BIGWangYuDong commented 2 years ago

What code did you use? Please use the issue template to give us more details.

CheungBH commented 2 years ago

I am using this command:

```shell
python tools/analysis_tools/analyze_results.py \
    ${CONFIG} \
    ${PREDICTION_PATH} \
    ${SHOWDIR}
```

which is supposed to generate the images with the top-k highest and lowest mAP. However, I saw that the images in the "good" folder actually perform badly.

CheungBH commented 2 years ago

Here's my environment

```
sys.platform: linux
Python: 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0]
CUDA available: True
GPU 0: GeForce GTX 1080 with Max-Q Design
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 7.5, V7.5.17
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~16.04) 9.4.0
PyTorch: 1.6.0+cu101
PyTorch compiling details: PyTorch built with:
TorchVision: 0.7.0+cu101
OpenCV: 4.4.0
MMCV: 1.4.6
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.1
MMDetection: 2.21.0+f9641f0
```

BIGWangYuDong commented 2 years ago

> I am using this command:
>
> ```shell
> python tools/analysis_tools/analyze_results.py ${CONFIG} ${PREDICTION_PATH} ${SHOWDIR}
> ```
>
> which is supposed to generate the images with the top-k highest and lowest mAP. However, I saw that the images in the "good" folder actually perform badly.

I'll have a check. One more question: what dataset did you use?

BIGWangYuDong commented 2 years ago

According to these lines: https://github.com/open-mmlab/mmdetection/blob/70f6d9cfade4a2f0b198e4f64776521d181b28be/mmdet/core/evaluation/mean_ap.py#L676-L680

the mAP calculation ignores classes that have no GT. This can yield mAP=1 even though the image contains false-positive predictions for other classes.

When evaluating the whole test/val set, `num_gts` is always > 0, but when evaluating a single image this can give misleading results. For now, if you would like to get meaningful per-image results, it is suggested to delete the `if cls_result['num_gts'] > 0` check when evaluating a single image's result.

The potential error will be fixed after discussion.
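To illustrate the point above, here is a minimal sketch of how skipping zero-GT classes can inflate a single-image mAP. The function and variable names are illustrative, not mmdetection's actual `eval_map` internals:

```python
# Sketch: averaging per-class AP into a per-image mAP, with the option
# to skip classes that have no ground-truth boxes (as mean_ap.py does).

def per_image_map(ap_per_class, num_gts_per_class, skip_empty=True):
    """Average AP over classes; optionally skip classes with no GT."""
    aps = []
    for ap, num_gts in zip(ap_per_class, num_gts_per_class):
        if skip_empty and num_gts == 0:
            continue  # zero-GT classes are excluded from the average
        aps.append(ap)
    return sum(aps) / len(aps) if aps else 0.0

# Image with one GT class (perfectly detected, AP=1.0) and false
# positives on two classes that have no GT in this image (AP=0.0 each):
aps = [1.0, 0.0, 0.0]
gts = [1, 0, 0]

print(per_image_map(aps, gts))                    # 1.0 — lands in "good"
print(per_image_map(aps, gts, skip_empty=False))  # ~0.33 — reflects the FPs
```

With the zero-GT check in place, the false positives contribute nothing to the average, which is why visually bad images can still end up in the "good" folder with mAP=1.0.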

BIGWangYuDong commented 2 years ago

Kindly pinging @ZwwWayne and @hhaAndroid to have a look.

CheungBH commented 2 years ago

> According to these lines:
>
> https://github.com/open-mmlab/mmdetection/blob/70f6d9cfade4a2f0b198e4f64776521d181b28be/mmdet/core/evaluation/mean_ap.py#L676-L680
>
> the mAP calculation ignores classes that have no GT. This can yield mAP=1 even though the image contains false-positive predictions for other classes.
>
> When evaluating the whole test/val set, `num_gts` is always > 0, but when evaluating a single image this can give misleading results. For now, if you would like to get meaningful per-image results, it is suggested to delete the `if cls_result['num_gts'] > 0` check when evaluating a single image's result.
>
> The potential error will be fixed after discussion.

Thank you for your reply. Waiting for the fix.

hhaAndroid commented 2 years ago

@CheungBH The currently implemented strategy is relatively simple: it calculates the mAP for each image and sorts by it, so it is normal for the mAP of the "good" folder to be all 1.
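The selection strategy described here can be sketched as follows. This is a simplified stand-in for what analyze_results.py does; the function and names are hypothetical:

```python
# Sketch: rank images by per-image mAP and take the top-k as "good"
# and the bottom-k as "bad".

def split_good_bad(image_maps, topk=2):
    """image_maps: dict mapping image id -> per-image mAP.

    Returns (good, bad) lists of image ids.
    """
    ranked = sorted(image_maps, key=image_maps.get)
    return ranked[-topk:], ranked[:topk]

# Because zero-GT classes are skipped, many images tie at mAP=1.0, so
# the "good" folder fills with mAP=1.0 images regardless of how their
# false positives actually look.
maps = {"img_a": 1.0, "img_b": 1.0, "img_c": 0.5, "img_d": 0.2}
good, bad = split_good_bad(maps, topk=2)
print(good)  # two images tied at mAP 1.0
print(bad)   # ['img_d', 'img_c']
```

Since ties at 1.0 are broken arbitrarily by sort order, an image whose mAP is inflated as described above is just as likely to be picked as a genuinely good one.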

CheungBH commented 2 years ago

> @CheungBH The currently implemented strategy is relatively simple: it calculates the mAP for each image and sorts by it, so it is normal for the mAP of the "good" folder to be all 1.

Thank you for your reply, but sorry, that is not the point. The point is that the "mAP=1" samples obviously perform badly, as shown in the image. The samples selected as "good" are not actually good.