dbolya / tide

A General Toolbox for Identifying Object Detection Errors
https://dbolya.github.io/tide
MIT License
702 stars · 115 forks

How to implement TIDE for a custom dataset? #19

Open LIKHITA12 opened 3 years ago

LIKHITA12 commented 3 years ago

In my dataset, one half is the COCO dataset and the other half is custom-added data. How should I check the model's performance on it? Can you please explain step by step?

TommyZihao commented 3 years ago

same question

RizhaoCai commented 3 years ago

Same question

nikolaassteenbergen-tomtom commented 3 years ago

+1

BaofengZan commented 3 years ago

+1

drewm1980 commented 2 years ago

pycocotools' dataset abstraction does not provide an API that's ergonomic for non-COCO data, but this library does; that's why I'm trying it out. Make one Data instance for your ground-truth data and one for your detections, fill them up using the add_* methods, and then trigger evaluation. One caveat: Data.add_detection and friends do NOT RLE-compress masks as you add them, so beware of high RAM usage. Hopefully TIDE won't complain if you compress the masks yourself with pycocotools before passing them in.

vjsrinivas commented 2 years ago

Any updates regarding adding your own custom dataset drivers?

vjsrinivas commented 2 years ago

If someone wants an example of the workflow @drewm1980 mentioned, here's a custom dataset driver that gets another non-COCO dataset (VOC2007) working: https://gist.github.com/vjsrinivas/56ca6e209adf23be17b9d2266b288c71

It still calculates AP with COCO's 101-point interpolation, so the numbers are not equal to the official VOC2007 mAP eval, but you can probably adjust it to produce the same values.
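For anyone unsure what the interpolation difference means in practice: COCO-style AP averages the best achievable precision at 101 evenly spaced recall thresholds, while the official VOC2007 metric uses only 11. The toy sketch below illustrates the gap on a made-up PR curve; it is not code from TIDE or the gist, and it assumes the precision values are already the monotone envelope.

```python
import numpy as np

def interpolated_ap(recalls, precisions, points):
    """Mean of the best precision at recall >= r, over `points` evenly
    spaced recall thresholds r in [0, 1] (0 if no point reaches r)."""
    ap = 0.0
    for r in np.linspace(0, 1, points):
        reachable = recalls >= r
        p = precisions[reachable].max() if reachable.any() else 0.0
        ap += p / points
    return ap

# Made-up PR curve (already a non-increasing precision envelope).
recalls = np.array([0.26, 0.51, 0.76, 1.0])
precisions = np.array([1.0, 0.8, 0.6, 0.4])

ap_coco = interpolated_ap(recalls, precisions, 101)  # COCO-style
ap_voc07 = interpolated_ap(recalls, precisions, 11)  # VOC2007-style
```

The two values differ on the same curve, which is why a TIDE run over VOC2007 data won't match the devkit's mAP until the interpolation is swapped out.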