princeton-vl / RAFT-Stereo


Question about the result on middlebury #56

Open · superxi opened this issue 2 years ago

superxi commented 2 years ago

Hi, thanks for open-sourcing such great work.

I tried evaluating on the MiddEval3 training dataset with your model raftstereo-middlebury.pth, but the results are worse than the scoreboard shown at https://vision.middlebury.edu/stereo/eval3/. How can I get the same precision as reported on the website?

I used the default parameters in evaluate_stereo.py. The command I used and the results are shown below.

[screenshot: evaluation command and results]
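For context, evaluate_stereo.py reports per-pixel stereo metrics averaged over the whole image. Below is a minimal sketch of the usual unweighted versions of those metrics (end-point error and a bad-pixel rate); the function name, threshold, and details here are illustrative assumptions, not the script's actual code:

```python
import numpy as np

def disparity_metrics(pred_disp, gt_disp, max_disp=192, bad_thresh=2.0):
    """Unweighted metrics over all valid ground-truth pixels.

    pred_disp, gt_disp: HxW float arrays of disparities. Pixels with
    non-finite or out-of-range ground truth are ignored.
    """
    valid = np.isfinite(gt_disp) & (gt_disp > 0) & (gt_disp < max_disp)
    err = np.abs(pred_disp[valid] - gt_disp[valid])
    epe = err.mean()                         # average end-point error (pixels)
    bad = (err > bad_thresh).mean() * 100.0  # percent of pixels off by > bad_thresh px
    return epe, bad
```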

lahavlipson commented 2 years ago

The results you generated are the same as those that were submitted to the Middlebury training dataset scoreboard. The difference is in the evaluation, which I believe prioritizes difficult image regions such as those "with fine detail and/or lack of texture."

See: https://vision.middlebury.edu/stereo/eval3/MiddEval3-newFeatures.html
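To make the distinction concrete, here is a hypothetical sketch of a region-weighted bad-pixel score of the kind such an evaluation might apply; the weight map and how it would be built are illustrative assumptions, not the Middlebury evaluation code:

```python
import numpy as np

def weighted_bad_pixel(pred_disp, gt_disp, weight, bad_thresh=2.0):
    """Bad-pixel rate where each pixel contributes according to a weight map.

    weight: HxW non-negative array, e.g. larger on regions flagged as
    difficult (fine detail, textureless surfaces). This weighting scheme
    is an assumption standing in for whatever the official evaluation does.
    """
    valid = np.isfinite(gt_disp) & (gt_disp > 0)
    err = np.abs(pred_disp - gt_disp)
    bad = (err > bad_thresh) & valid
    w = weight * valid
    return 100.0 * (w * bad).sum() / max(w.sum(), 1e-8)
```

Under a weighting like this, a method that struggles on thin structures or textureless areas can score noticeably worse on the scoreboard than an unweighted per-pixel average would suggest, which would explain the gap observed above.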