open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

How to evaluate with COCO style using CustomDataset? #145

Closed xuw080 closed 5 years ago

xuw080 commented 5 years ago

I prepared my custom dataset with the CustomDataset class. However, when I test my model on it, this error appears:

Traceback (most recent call last):
  File "tools/test.py", line 124, in <module>
    main()
  File "tools/test.py", line 112, in main
    results2json(dataset, outputs, result_file)
  File "/home/Xwang/anaconda3/envs/mmdetection/lib/python3.7/site-packages/mmdet-0.5.4+65a2e5e-py3.7.egg/mmdet/core/evaluation/coco_utils.py", line 142, in results2json
    json_results = det2json(dataset, results)
  File "/home/Xwang/anaconda3/envs/mmdetection/lib/python3.7/site-packages/mmdet-0.5.4+65a2e5e-py3.7.egg/mmdet/core/evaluation/coco_utils.py", line 106, in det2json
    img_id = dataset.img_ids[idx]
AttributeError: 'CustomDataset' object has no attribute 'img_ids'

Since CustomDataset doesn't have img_ids, we cannot use coco_utils for testing. How do we evaluate a CustomDataset with COCO-style evaluation methods?

Thanks

xuw080 commented 5 years ago

Also, it seems this code always uses the custom.py imported from "/home/Xwang/anaconda3/envs/mmdetection/lib/python3.7/site-packages/mmdet-0.5.4+65a2e5e-py3.7.egg/mmdet/datasets/custom.py", so it is very hard for us to make any changes: although you provide custom.py inside mmdetection/mmdet/datasets, it is never the one actually used. We have to change the code inside /home/Xwang/anaconda3/envs/mmdetection/lib/python3.7/site-packages/mmdet-0.5.4+65a2e5e-py3.7.egg/mmdet/datasets/custom.py, which is super inconvenient. Is there an easier way?

Thanks

hellock commented 5 years ago

For the first question, if you want to evaluate with the COCO APIs, you need to convert your dataset to the COCO annotation format and use CocoDataset instead. If CustomDataset is preferred, you can write an evaluation hook with the methods provided in mean_ap.py.
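In case it helps later readers: converting custom annotations to COCO format mostly means emitting `images`, `annotations`, and `categories` lists with the fields the COCO JSON schema expects. A minimal sketch, assuming an illustrative input record format (the `filename`/`width`/`height`/`bboxes`/`labels` keys below are made-up names for a generic custom format, not mmdetection's):

```python
import json

def to_coco(samples, class_names):
    """Convert a list of {'filename', 'width', 'height', 'bboxes', 'labels'}
    records (an illustrative custom format) into a COCO-style dict."""
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i + 1, "name": n}
                       for i, n in enumerate(class_names)],
    }
    ann_id = 1
    for img_id, s in enumerate(samples, start=1):
        coco["images"].append({
            "id": img_id,
            "file_name": s["filename"],
            "width": s["width"],
            "height": s["height"],
        })
        for (x1, y1, x2, y2), label in zip(s["bboxes"], s["labels"]):
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": label + 1,          # COCO category ids start at 1
                "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO uses [x, y, w, h]
                "area": (x2 - x1) * (y2 - y1),
                "iscrowd": 0,
            })
            ann_id += 1
    return coco

if __name__ == "__main__":
    samples = [{"filename": "a.jpg", "width": 640, "height": 480,
                "bboxes": [(10, 20, 110, 220)], "labels": [0]}]
    print(json.dumps(to_coco(samples, ["cat"]), indent=2))
```

Dump the result with `json.dump` to a file and point a CocoDataset config at it.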

For the second one, if you installed mmdetection with pip install ., you can just modify the code in the current folder and run pip install . again. If you installed it with pip install -e . (editable mode), any modifications you make in the current folder take effect directly. In both cases there is no need to modify code under the anaconda environment.
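For reference, the two install workflows look like this (commands assume you are in the root of your cloned mmdetection repository):

```shell
# Regular install: edit code in the repo, then reinstall to pick up changes.
pip install .

# Editable (develop) install: changes in the repo take effect immediately,
# no reinstall needed.
pip install -e .
```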

xuw080 commented 5 years ago

Will try it, thanks for your kind reply.

Curry1201 commented 5 years ago

Hi @xuw080, I have encountered the same error as you. How did you solve it specifically? AttributeError: 'CustomDataset' object has no attribute 'img_ids'

xuw080 commented 5 years ago

I converted my dataset's label format to COCO style. This may be the easiest way to solve it.

wangg12 commented 5 years ago

Converting to COCO style is not very convenient in many situations. I think it would be better to have evaluation functions for CustomDataset that work the way the cocoapi does.

hellock commented 5 years ago

@wangg12 Actually, the voc_eval.py script can work for custom datasets, though it only calculates AP at IoU 0.5. You can use a for loop to call the eval_map() method at L35 to calculate AP[0.5:0.95], and it is usually fast. Note that the implementation details of cocoapi and our eval_map() differ slightly; eval_map() follows the standard VOC implementation.
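The for-loop idea can be sketched as below. Note that the exact mmdet eval_map() signature is an assumption here (roughly detections, annotations, and an iou_thr keyword, returning a (mean_ap, per-class results) pair), so a dummy stand-in is used to keep the sketch self-contained and runnable:

```python
import numpy as np

def coco_style_map(det_results, annotations, eval_map_fn):
    """Average AP over IoU thresholds 0.5:0.05:0.95, COCO style,
    by repeatedly calling a VOC-style eval_map function."""
    aps = []
    for thr in np.arange(0.5, 1.0, 0.05):  # 0.50, 0.55, ..., 0.95
        mean_ap, _ = eval_map_fn(det_results, annotations, iou_thr=thr)
        aps.append(mean_ap)
    return float(np.mean(aps))

# Dummy stand-in for mmdet's eval_map, only so the sketch runs on its own:
# pretend AP falls off linearly as the IoU threshold rises.
def dummy_eval_map(det_results, annotations, iou_thr=0.5):
    return 1.0 - iou_thr, None

if __name__ == "__main__":
    print(round(coco_style_map([], [], dummy_eval_map), 3))
```

Swapping dummy_eval_map for the real eval_map (with its full argument list) gives a COCO-like AP[0.5:0.95] on top of the VOC-style evaluator.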

wangg12 commented 5 years ago

@hellock I wonder how the implementation difference would affect the evaluation result if I use eval_map() instead of cocoapi?

hellock commented 5 years ago

The differences are in very detailed issues, such as how true positives and false positives are counted, and how ignore regions are used. The evaluation code provided by different benchmarks (COCO, VOC07, VOC12, ImageNet) has minor differences, but the mAP gap between the different protocols is usually small. Our implementation of eval_map() covers the metrics of VOC07, VOC12, and ImageNet, and obtains exactly the same results as the official code. COCO is not reimplemented since cocoapi is already written in Python and is more complicated; we don't have enough time :)

wangg12 commented 5 years ago

@hellock It seems eval_map() does not support evaluation of instance segmentations.

hellock commented 5 years ago

@wangg12 Yes it only supports bbox evaluation now.

gavrin-s commented 5 years ago

How do I create the result_file for voc_eval? What format does it expect?

mathmanu commented 5 years ago

You may be passing the --eval argument while running tools/test.py. Don't do that when evaluating on VOC. Instead, write the results to a pkl file with --out results.pkl, then evaluate them with tools/voc_eval.py.
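Roughly, the two-step workflow looks like this (the config and checkpoint paths are placeholders, and the voc_eval.py argument order is an assumption based on the script at the time):

```shell
# Step 1: run inference and dump raw detections -- note: no --eval flag.
python tools/test.py configs/my_voc_config.py work_dirs/my_run/latest.pth \
    --out results.pkl

# Step 2: evaluate the dumped results with the VOC-style evaluator.
python tools/voc_eval.py results.pkl configs/my_voc_config.py
```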