tensorflow / models

Models and examples built with TensorFlow

Get multi class detection precision in COCO evaluation #5846

Closed. ghost closed this issue 2 years ago.

ghost commented 5 years ago

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:

  1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


System information

```python
def __init__(self,
             scales=(0.5, 1.0, 2.0),
             aspect_ratios=(0.5, 1.0, 2.0),
```

and

```python
# Handle argument defaults
if base_anchor_size is None:
  base_anchor_size = [128, 64]
base_anchor_size = tf.to_float(tf.convert_to_tensor(base_anchor_size))
```

You can collect some of this information using our environment capture script:

https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

You can obtain the TensorFlow version with

```
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
```

Describe the problem

Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

This is not a bug; rather, I am trying to use functionality that is probably built into the TensorFlow Object Detection API or the MS COCO evaluation tools.

I use a custom training and evaluation set with two classes: 'vehicle' and 'heavyVehicle'. When I run the code above, the results look like the ones below.

[screenshot of the aggregate COCO evaluation metrics]

However, what I want to see is the precision metric for each of the two classes. It would also be great if I could store the result in a txt file.

I guess it is related to 'coco_tools.py' and 'coco_evaluation.py', which are in the 'models/research/object_detection/metrics' folder.

There are some parameters that seem to affect the evaluation results, such as 'include_metrics_per_category' and 'all_metrics_per_category'.

There may be more work needed to get a result. Simply setting those two flags to 'True' does not work; it produces a 'Category stats do not exist' error.
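
For reference, here is roughly what I am trying (a minimal sketch: the evaluator class and flag names come from coco_evaluation.py; the categories list matches my two classes, and the surrounding calls are placeholders for my own setup):

```python
# Minimal sketch of my attempt, using the evaluator from
# models/research/object_detection/metrics/coco_evaluation.py.
from object_detection.metrics import coco_evaluation

categories = [{'id': 1, 'name': 'vehicle'},
              {'id': 2, 'name': 'heavyVehicle'}]

evaluator = coco_evaluation.CocoDetectionEvaluator(
    categories,
    include_metrics_per_category=True,
    all_metrics_per_category=True)

# ...after adding ground truth and detections for each image, e.g.:
# evaluator.add_single_ground_truth_image_info(image_id, groundtruth_dict)
# evaluator.add_single_detected_image_info(image_id, detections_dict)
metrics = evaluator.evaluate()  # -> fails with 'Category stats do not exist'
```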

Could you please help me? Thank you.

Source code / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.

pkulzc commented 5 years ago

This requires some changes in the cocoapi repo, which isn't part of tensorflow/models, so include_metrics_per_category doesn't work externally.

netanel-s commented 5 years ago

See this: https://github.com/tensorflow/models/issues/4778#issuecomment-430262110
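
In the meantime, a common workaround is to read per-category AP directly out of pycocotools after running the evaluation yourself. A minimal sketch, assuming you have exported COCO-format ground truth and detection files ('gt.json' and 'dets.json' are placeholder names) and relying on pycocotools' [T, R, K, A, M] precision layout:

```python
# Sketch: per-category AP straight from pycocotools, bypassing the
# include_metrics_per_category flag entirely.
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('gt.json')                 # ground truth (COCO JSON)
coco_dt = coco_gt.loadRes('dets.json')    # detections (COCO results JSON)

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# coco_eval.eval['precision'] has shape [T, R, K, A, M]:
# IoU thresholds x recall thresholds x categories x area ranges x max dets.
precision = coco_eval.eval['precision']
with open('per_category_ap.txt', 'w') as f:
    for k, cat_id in enumerate(coco_eval.params.catIds):
        name = coco_gt.loadCats(cat_id)[0]['name']
        p = precision[:, :, k, 0, 2]  # area='all', maxDets=100
        ap = np.mean(p[p > -1]) if (p > -1).any() else float('nan')
        f.write('%s: %.4f\n' % (name, ap))
```

Writing the loop output to 'per_category_ap.txt' also covers the original request to store the result in a txt file.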

adityapatadia commented 5 years ago

@pkulzc If this is not supported by COCO, at least the OID metrics should support a per-category report. That does not work at all with the new model_main.py.

corner200 commented 5 years ago

So how can we get the mAP for every class with the TF detection API if it doesn't work with model_main.py?

tensorflowbutler commented 4 years ago

Hi there, we are checking to see if you still need help on this, as this seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing. If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing it.

zishanahmed08 commented 4 years ago

Is there any update on this issue?

jvishnuvardhan commented 2 years ago

This is a stale issue. Please check with recent versions of TF and the models, and open a new issue if this still occurs with new models. Thanks!

google-ml-butler[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] commented 2 years ago

Closing as stale. Please reopen if you'd like to work on this further.