As the comment mentions, we don't have negative samples (i.e. images without any instances in them) in the dataset, so `num_instances` is always positive. However, your comment is valid. Feel free to update the code and create a PR, and I'll accept it. Otherwise I'll update the eval in the next release (which will be early Feb).
Hi @ahmadyan, thanks for the reply.
I was wondering whether the reported numbers in your paper were produced with the current evaluation code. I think it has some problems, as I mentioned in my original question. Do you also plan to re-evaluate your proposed methods (MobilePose & two-stage) in the next release?
definitely.
Thank you so much.
Hi there,
I think there is a problem in your evaluation code: it does not count the case where there are no predictions for a given input, so the final numbers may not fully reflect the truth.
https://github.com/google-research-datasets/Objectron/blob/aa667e689848aa3619e087b493ddb3b919f9e0c8/objectron/dataset/eval.py#L124-L169
In your code snippet, `instance` represents a ground-truth box while `box` represents a prediction. You try to match each prediction with one ground truth, but if there are no predictions (and thus no matches), you just skip the sample. I think you should instead record that case as missed targets: still add to `num_instances`, but do not update `tp` and `fp`.
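For illustration, here is a minimal sketch of the change I have in mind. It is not the actual Objectron `eval.py`; the loop is simplified and `iou_fn` is a hypothetical matching function, but `num_instances`, `tp`, and `fp` follow the names in the snippet above:

```python
def evaluate_sample(gt_instances, predictions, num_instances, tp, fp,
                    iou_fn, iou_thresh=0.5):
    """Accumulate detection stats for one image (simplified sketch)."""
    # Always count ground-truth instances, even when there are no predictions.
    # Skipping the sample (the current behavior) silently drops missed targets.
    num_instances += len(gt_instances)

    if not predictions:
        # No predictions -> every ground-truth box is a missed target.
        # Do not touch tp/fp; the larger denominator already reflects the misses.
        return num_instances, tp, fp

    matched = set()
    for box in predictions:
        # Match each prediction to its best-overlapping unmatched ground truth.
        best_iou, best_idx = 0.0, -1
        for i, instance in enumerate(gt_instances):
            if i in matched:
                continue
            iou = iou_fn(box, instance)
            if iou > best_iou:
                best_iou, best_idx = iou, i
        if best_idx >= 0 and best_iou >= iou_thresh:
            matched.add(best_idx)
            tp += 1
        else:
            fp += 1
    return num_instances, tp, fp
```

With a change along these lines, an image that has ground-truth objects but no detections still increases `num_instances`, so the missed detections lower recall and average precision instead of being ignored.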