AruniRC / detectron-self-train

A PyTorch Detectron codebase for domain adaptation of object detectors.
MIT License

Can you provide separate evaluation code? #12

Closed tqvinhcs closed 5 years ago

tqvinhcs commented 5 years ago

Hi, as far as I understand, the evaluation is done on the fly as you run the detection. That means we cannot evaluate a model from another source (e.g., TensorFlow). Can you provide an evaluation script that is able to evaluate the box predictions only?

For example, I have a separate TensorFlow model that outputs 'bbox_bdd_peds_val_results.json', and I want to evaluate this result file against the ground truth 'bdd_peds_val.json' without having to run your detection script. Something like: evaluate.py --gt bdd_peds_val.json --pred bbox_bdd_peds_val_results.json

Thank you

AruniRC commented 5 years ago

Hi,

That is definitely a useful script, and we will work on getting a quick demo out. In the meantime, can you take a look at this script, which does exactly what you asked for (it evaluates detections against ground-truth annotations, both in the MS-COCO JSON format):

https://github.com/AruniRC/detectron-self-train/blob/master/tools/evaluate_json.py

It is not very well documented, but hopefully it can point you in the right direction.
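For reference, a standalone evaluation along these lines can also be written directly with pycocotools. The sketch below is not the repo's evaluate_json.py; it is a minimal stand-in that assumes the ground-truth file is a full COCO annotation JSON and the predictions file is a COCO results list of {"image_id", "category_id", "bbox", "score"} records (the flag names --gt and --pred are just illustrative):

```python
# Minimal standalone COCO-style box evaluation (sketch, not the repo's script).
import argparse

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


def main():
    parser = argparse.ArgumentParser(description="Evaluate COCO-format detections")
    parser.add_argument("--gt", required=True,
                        help="ground-truth JSON, e.g. bdd_peds_val.json")
    parser.add_argument("--pred", required=True,
                        help="detections JSON, e.g. bbox_bdd_peds_val_results.json")
    args = parser.parse_args()

    coco_gt = COCO(args.gt)               # load ground-truth annotations
    coco_dt = coco_gt.loadRes(args.pred)  # attach detections to the same image ids

    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                 # prints the standard COCO AP/AR table


if __name__ == "__main__":
    main()
```

Run as: python evaluate.py --gt bdd_peds_val.json --pred bbox_bdd_peds_val_results.json. Since the detections come from any source (TensorFlow included), this never touches the Detectron model code.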