Open NielsRogge opened 3 years ago
Thanks for the proposal. We have had a few requests for moving things out of the references and into the library, so this proposal goes in the same direction.
As with similar requests, we need to review the APIs that we have in the references and decide whether they are mature enough for us to commit to them. Unfortunately, this use case has one concern: adding this class to TorchVision would require taking on an extra dependency due to the pycocotools imports. I'm not sure that's the direction we would like to take, but I would also like to hear from @fmassa.
Great. Otherwise I would perhaps make a small PyPI package that provides this, but I don't even know whether that is allowed; I don't want to take credit for things that were not made by me, of course (I would cite the torchvision authors and include the license).
🚀 Feature
It would be great if we could do the following:
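The original snippet did not survive here, so below is a minimal sketch of the intended usage; the import path is hypothetical, since CocoEvaluator is not currently exposed anywhere in the torchvision package:

```python
# Hypothetical import path - CocoEvaluator currently lives only in
# references/detection/coco_eval.py, not in the torchvision package:
from torchvision.ops import CocoEvaluator

# coco_gt would be a pycocotools.coco.COCO object holding the ground-truth annotations
coco_evaluator = CocoEvaluator(coco_gt, iou_types=["bbox"])
```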
Right now, CocoEvaluator cannot be imported, as it's currently under the references directory in this repository; it's not part of the torchvision package.

Motivation
I'm currently implementing DETR (End-to-End Object Detection with Transformers), and right now I have to copy all of this COCO evaluation code in order to evaluate the model; the authors of DETR likewise copied a lot of the evaluation code into their own repository. It would be great if we could simply import it and run evaluation of a deep learning model, roughly as in the sketch below.
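For context, this is roughly what the copied evaluation loop looks like when using the CocoEvaluator from references/detection/coco_eval.py. The method names follow that file, but treat this as a sketch rather than a verbatim excerpt:

```python
import torch
# Both modules are copied by hand from references/detection in this repo:
from coco_eval import CocoEvaluator
from coco_utils import get_coco_api_from_dataset

@torch.no_grad()
def evaluate(model, data_loader, device):
    model.eval()
    coco_gt = get_coco_api_from_dataset(data_loader.dataset)
    coco_evaluator = CocoEvaluator(coco_gt, iou_types=["bbox"])
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        outputs = model(images)
        outputs = [{k: v.to("cpu") for k, v in o.items()} for o in outputs]
        # CocoEvaluator.update expects a dict mapping image_id -> prediction dict
        res = {t["image_id"].item(): o for t, o in zip(targets, outputs)}
        coco_evaluator.update(res)
    coco_evaluator.synchronize_between_processes()
    coco_evaluator.accumulate()
    coco_evaluator.summarize()  # prints the COCO mAP table
    return coco_evaluator
```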
Even the official torchvision tutorial states:
"In references/detection/, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py, references/detection/utils.py and references/detection/transforms.py. Just copy everything under references/detection to your folder and use them here."
Life would be easier if users didn't need to dig into GitHub repos and copy files into their own folders (the tutorial workflow ends up looking like the sketch after this paragraph). There would also be a central place (namely this repository) where the official COCO evaluation is defined and can be updated in the future; right now, evaluation code is scattered across hundreds of GitHub repos.
This would also foster reproducibility of experiments with object detection models, since right now it's a lot of work just to evaluate a model with metrics like mAP.