yongliu20 closed this issue 3 years ago
That doesn't look right... Can I see some output images?
Specifically, you can compare them with our pre-computed results.
The results of 'blackswan' are these:
I also think this J&F result is not right, but I cannot figure out what went wrong...
"blackswan" will almost always be good because it is so easy... Can you download our pre-computed results and see if there are any differences? I want to check whether the problem lies in generation or evaluation.
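To separate a generation problem from an evaluation problem, one way is to diff the generated masks against the pre-computed ones pixel by pixel. A minimal sketch (the helper `mask_diff_ratio` is hypothetical, not part of the repository; masks are assumed to be loaded as NumPy arrays of equal shape):

```python
import numpy as np

def mask_diff_ratio(pred, ref):
    """Fraction of pixels on which two segmentation masks disagree."""
    pred = np.asarray(pred)
    ref = np.asarray(ref)
    assert pred.shape == ref.shape, "masks must have the same resolution"
    return float(np.mean(pred != ref))

# Synthetic example: two 4x4 binary masks that differ in exactly one pixel
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 1
print(mask_diff_ratio(a, b))  # 1/16 = 0.0625
```

A ratio near zero for every frame would point to the evaluation script rather than mask generation.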
Ok, I will do it now
There are indeed some differences between them, but they are very small. I also evaluated your pre-computed results, and the J&F is: The evaluation method I use is from DAVIS.
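For reference, the region-similarity part of J&F (the J score) is the mean intersection-over-union between predicted and ground-truth masks. A minimal sketch of the per-frame IoU, assuming binary masks (this is not the official DAVIS implementation, which also handles the boundary F-measure and averaging over sequences):

```python
import numpy as np

def jaccard(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)
```

Note that if the prediction uses foreground value 255 while the script expects 1 (or vice versa) and the masks are not cast to boolean first, scores can come out badly wrong, which is one common source of mismatched J&F numbers.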
I find that my result is the same as D16_s012, and the above result belongs to D16_s012_notop.
DAVIS 2016 evaluation code is not very well maintained. It was not easy for me to get it right back then... I recall there are some discussion threads about a proper implementation. I will check those and get back to you.
Here: https://github.com/davisvideochallenge/davis2017-evaluation/issues/4 Hope it helps.
You can always check the numbers with ours/STM's. I might open source my own evaluation code later.
Well, I have solved this problem by modifying your eval_davis_2016.py, like this:
Well yeah if your evaluation script expects 0/1 outputs... The ground truths in DAVIS 2016 are 0/255 so I'm sticking with that. Glad that it has been fixed.
Hmm I think I can actually modify the code a bit to make both happy. Gonna do that. Thanks.
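The "make both happy" change described above could amount to treating any nonzero pixel as foreground, so that 0/1 and 0/255 masks normalize to the same thing. A minimal sketch under that assumption (the helper name `to_binary` is hypothetical):

```python
import numpy as np

def to_binary(mask):
    """Normalize a mask to {0, 1}, accepting either 0/1 or 0/255 conventions."""
    return (np.asarray(mask) > 0).astype(np.uint8)

# The same mask written in both conventions maps to an identical result
m255 = np.array([[0, 255], [255, 0]], dtype=np.uint8)
m01 = np.array([[0, 1], [1, 0]], dtype=np.uint8)
```

Binarizing both the prediction and the ground truth this way before scoring makes the evaluation independent of which foreground value either side uses.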
My previous problem was that the output values of the foreground pixels were not the same. Thanks for your help!
I want to ask why I get this result using your pre-trained model? Thanks!