Open mazeyu opened 3 years ago
@mazeyu Those metrics are implemented and used to evaluate the FlyingThings models. For Middlebury metrics, please refer to the Middlebury evaluation site. Currently the models are not being run correctly; using a machine with older versions (Dec 2020) of Python/TensorFlow/NumPy may help. I'm looking into what exactly is causing the issue.
Thanks! Do you mean you also ran into this issue?
Yes, I've regenerated the .pb files using the current codebase, and they behave closer to the paper. The older versions from Dec 2020 are available as middlebury_dXXX_v1.pb. Note that the metrics implemented in the FlyingThings script don't match the official ones from the Middlebury website.
See this thread for more details: https://github.com/google-research/google-research/issues/613#issuecomment-903609552
Hi, I didn't get the reported accuracy using your shell script and model. Where might the problem be? Thanks for your help! This is at half resolution. The printed metrics are:

```
Images processed: 15
psm_epe      bad_0.1      bad_0.5      bad_1.0      bad_2.0      bad_3.0
[ 2.07099784 85.09916146 42.11792107 23.01151438 12.87441467  9.49672834]
```
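For reference, the printed numbers above follow the standard stereo conventions: end-point error (EPE) is the mean absolute disparity error, and bad_X is the percentage of pixels whose error exceeds X pixels. Here is a minimal sketch of how such metrics are typically computed; the function name, argument names, and `valid` mask handling are illustrative, not taken from the repo's evaluation script (which, as noted above, may differ from the official Middlebury definitions).

```python
import numpy as np

def disparity_metrics(pred, gt, valid=None):
    """Compute EPE and bad-X error rates for a predicted disparity map.

    Illustrative sketch only -- names and thresholds are assumptions,
    not the repo's actual evaluation code.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid is None:
        # By default, evaluate wherever ground truth is finite.
        valid = np.isfinite(gt)
    err = np.abs(pred - gt)[valid]
    metrics = {"epe": err.mean()}
    for t in (0.1, 0.5, 1.0, 2.0, 3.0):
        # Percentage of valid pixels with absolute error above threshold t.
        metrics[f"bad_{t}"] = 100.0 * (err > t).mean()
    return metrics
```

Comparing the output of such a function against the numbers reported by the official Middlebury evaluation page is one way to check whether the script's metric definitions match.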