You can set desired_recall_delta to 0.0001 when calling the tf op. I will rebuild the pip package to fix this.
Alternatively, you can submit to the leaderboard (validation set) to get accurate numbers.
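In case it helps, a minimal sketch of that override, assuming you build the metrics config yourself via the Config proto in waymo_open_dataset.protos.metrics_pb2 (everything other than desired_recall_delta is elided here):

```python
# Sketch: override desired_recall_delta on the metrics config passed to the
# detection metrics TF op. Only the relevant field is shown; keep the rest of
# your existing config (breakdowns, IoU thresholds, ...) unchanged.
from waymo_open_dataset.protos import metrics_pb2

config = metrics_pb2.Config()
# ... populate the rest of the config as you already do ...
config.desired_recall_delta = 0.0001  # finer recall sampling than the 0.05 default
# then hand `config` to the detection metrics op as before.
```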
@peisun1115 Thanks for the quick response!
I also set desired_recall_delta to 0.0001 and rebuilt the command line tool. With desired_recall_delta = 0.0001 for both the Tensorflow op and the command line tool, their results match now. 👍

However, the results from the leaderboard (online validation set evaluation server) are still the same as with desired_recall_delta = 0.05 in the command line tool (exactly the same as the command line tool result shown above), and they do not match the results with desired_recall_delta = 0.0001. I guess the online evaluation server's desired_recall_delta also needs to be updated.

Thanks for your help again!
Hi all,
Thanks for this great dataset!
I am running evaluations for 3D detection on the validation set, and I am seeing consistent discrepancies for all my models between the results from the command line tool and the results from the Tensorflow metrics op, as below:
For example, this is the result from the command line tool:
And this is the result from the Tensorflow op for the same model:
The gaps are large for vehicles and near-range objects. I am wondering where these consistent discrepancies come from. Is there any difference between the two tools?
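For reference, this is roughly how I call the TF op (a sketch from memory, following the waymo_open_dataset metrics tutorial; the exact signature, dtypes, and config values are assumptions and may differ in your version):

```python
# Sketch of the TF-op evaluation path, per the waymo_open_dataset metrics
# tutorial. Dummy single-box tensors keep it self-contained; in practice the
# boxes/scores/types come from the model and the dataset.
import tensorflow as tf
from google.protobuf import text_format
from waymo_open_dataset.metrics.python import detection_metrics
from waymo_open_dataset.protos import metrics_pb2

config = text_format.Parse(
    """
    num_desired_score_cutoffs: 11
    breakdown_generator_ids: OBJECT_TYPE
    difficulties {}
    matcher_type: TYPE_HUNGARIAN
    iou_thresholds: 0.5
    iou_thresholds: 0.7
    iou_thresholds: 0.5
    iou_thresholds: 0.5
    iou_thresholds: 0.5
    box_type: TYPE_3D
    """,
    metrics_pb2.Config())

pd_frame_id = tf.constant([1], dtype=tf.int64)
pd_bbox = tf.constant([[0., 0., 1., 4., 2., 2., 0.]], dtype=tf.float32)  # [x,y,z,l,w,h,heading]
pd_type = tf.constant([1], dtype=tf.uint8)   # 1 = TYPE_VEHICLE
pd_score = tf.constant([0.9], dtype=tf.float32)
gt_frame_id = tf.constant([1], dtype=tf.int64)
gt_bbox = tf.constant([[0., 0., 1., 4., 2., 2., 0.]], dtype=tf.float32)
gt_type = tf.constant([1], dtype=tf.uint8)

metric_ops = detection_metrics.get_detection_metric_ops(
    config=config,
    prediction_frame_id=pd_frame_id,
    prediction_bbox=pd_bbox,
    prediction_type=pd_type,
    prediction_score=pd_score,
    prediction_overlap_nlz=tf.zeros_like(pd_frame_id, dtype=tf.bool),
    ground_truth_frame_id=gt_frame_id,
    ground_truth_bbox=gt_bbox,
    ground_truth_type=gt_type,
    ground_truth_difficulty=tf.ones_like(gt_frame_id, dtype=tf.uint8),
)
```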
Thanks a lot!