This requires some changes in the cocoapi repo, which isn't part of tensorflow/models, so include_metrics_per_category doesn't work out of the box.
@pkulzc If this is not supported by COCO, then at least the OID metrics should support a per-category report. That does not work at all with the new model_main.py.
So how can we get the mAP for every class with the TF detection API if it doesn't work with model_main.py?
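(For anyone landing here: one possible workaround is to bypass model_main.py and run pycocotools directly on COCO-format ground truth and detection files, then read the per-category precision array yourself. This is only a minimal sketch; the JSON file names below are placeholders, not something produced by the Object Detection API.)

```python
# Minimal sketch of a per-category AP workaround using pycocotools directly.
# Assumes ground truth and detections have already been exported to COCO JSON
# format; 'groundtruth.json' and 'detections.json' are placeholder file names.
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('groundtruth.json')
coco_dt = coco_gt.loadRes('detections.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# coco_eval.eval['precision'] has shape
# [iou_thresholds, recall_thresholds, categories, area_ranges, max_dets].
precision = coco_eval.eval['precision']
for idx, cat_id in enumerate(coco_eval.params.catIds):
    # Average over IoU thresholds and recall levels for area=all, maxDets=100.
    p = precision[:, :, idx, 0, -1]
    ap = np.mean(p[p > -1]) if (p > -1).any() else float('nan')
    name = coco_gt.loadCats(cat_id)[0]['name']
    print('AP for {}: {:.4f}'.format(name, ap))
```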
Hi There, We are checking to see if you still need help on this, as this seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing. If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing it.
Is there any update on this issue?
This is a stale issue. Please check against recent TF versions of the models and open a new issue if this still occurs with the new models. Thanks!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
Please go to Stack Overflow for help and support:
http://stackoverflow.com/questions/tagged/tensorflow
Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:
Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Doing support work only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information
```
def __init__(self, scales=(0.5, 1.0, 2.0), aspect_ratios=(0.5, 1.0, 2.0),
```
and
```
# Handle argument defaults
```
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the problem
Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.
This is not a bug; rather, I am trying to use functionality that is probably already built into the TensorFlow Object Detection API or the MS COCO evaluation code.
I use a custom training and evaluation set with two classes: 'vehicle' and 'heavyVehicle'. When I run the code above, the results look like the output below.
However, what I want to see is the precision metric for each of the two classes. It would also be great if I could store the result as a txt file.
I guess it is related to 'coco_tools.py' and 'coco_evaluation.py', which are in the 'models/research/object_detection/metrics' folder.
There are some flags that seem to affect the evaluation results, such as 'include_metrics_per_category' and 'all_metrics_per_category'.
There are probably more steps needed to get a result: simply setting those two flags to True does not work, and it produces a 'Category stats do not exist' error.
Could you please help me? Thank you.
Source code / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
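Below is a minimal, self-contained sketch of how I understand the evaluator is supposed to be used, and how I would like to save the metrics to a txt file. The single synthetic image, the category list, and the output path are made-up examples to illustrate the call pattern, not my real data; the include_metrics_per_category=True flag is where the error appears for me.

```python
import numpy as np
from object_detection.core import standard_fields
from object_detection.metrics import coco_evaluation

# Category list matching my dataset (two classes).
categories = [{'id': 1, 'name': 'vehicle'},
              {'id': 2, 'name': 'heavyVehicle'}]

evaluator = coco_evaluation.CocoDetectionEvaluator(
    categories,
    include_metrics_per_category=True)  # the flag that triggers the error

# One synthetic image just to illustrate the call pattern.
evaluator.add_single_ground_truth_image_info(
    image_id='image1',
    groundtruth_dict={
        standard_fields.InputDataFields.groundtruth_boxes:
            np.array([[10., 10., 50., 50.]], dtype=np.float32),
        standard_fields.InputDataFields.groundtruth_classes:
            np.array([1], dtype=np.int32)
    })
evaluator.add_single_detected_image_info(
    image_id='image1',
    detections_dict={
        standard_fields.DetectionResultFields.detection_boxes:
            np.array([[10., 10., 50., 50.]], dtype=np.float32),
        standard_fields.DetectionResultFields.detection_scores:
            np.array([0.9], dtype=np.float32),
        standard_fields.DetectionResultFields.detection_classes:
            np.array([1], dtype=np.int32)
    })

metrics = evaluator.evaluate()

# Saving the metrics dict to a plain text file (example output path).
with open('eval_metrics.txt', 'w') as f:
    for key in sorted(metrics):
        f.write('{}: {}\n'.format(key, metrics[key]))
```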