ClementPinard / SfmLearner-Pytorch

Pytorch version of SfmLearner from Tinghui Zhou et al.
MIT License

Output doesn't make sense #103

Closed adizhol closed 3 years ago

adizhol commented 4 years ago

Hi,

I ran the pre-trained DispNet on a test set from KITTI (raw), and here are the inverse depth map and the disparity. The depth map doesn't make sense... [inverse depth map and disparity images attached]

ClementPinard commented 4 years ago

The depth map values are very low, while the disparity is normalized. As such, the disparity might have a problem if there is a very far outlier (the normalization will flatten everything else), and the depth will have a problem if the scale is not fitted, which is the case here: I suspect everything is below 1 depth unit (there is no proper unit for this figure, since the network didn't learn the scale).
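A quick toy illustration of both effects (plain numpy, not tied to the repo's code):

```python
import numpy as np

# Toy disparity map: mostly ~0.5, with one outlier far above the rest.
disp = np.full((4, 4), 0.5)
disp[0, 0] = 20.0

# Max-normalization (as done for visualization) squashes everything else:
disp_norm = disp / disp.max()
print(disp_norm[1, 1])           # ~0.025 -> almost black in the color map

# The predicted depth is only defined up to scale; here it all sits below ~2,
# so a color scale capped at 10 makes the whole map look flat.
depth = 1.0 / disp
print(depth.min(), depth.max())  # 0.05 2.0
```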

You can set up the color scale of the depth map so that it is more readable by modifying the max depth here and putting in something lower than 10: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/run_inference.py#L81
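Roughly, that line builds the depth visualization like this (paraphrased from memory, the exact colormap argument may differ on current master); lowering `max_value` is the only change needed:

```python
# Current visualization: color scale tops out at a depth of 10
depth = 1 / output
depth_viz = (255 * tensor2array(depth, max_value=10, colormap='rainbow')).astype(np.uint8)

# If the predicted depth is all below ~1, lower max_value so the colors
# actually span the predicted range, e.g.:
depth_viz = (255 * tensor2array(depth, max_value=1, colormap='rainbow')).astype(np.uint8)
```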

I should probably make it an option in the script arguments.

adizhol commented 4 years ago

Thanks,

So how did they get the pretty depth in the original paper? Or are they showing disparity?

Also, when I set a lower max_depth, I can see details, but the result is very blurry and nothing like in the paper.

jilner commented 3 years ago

I'm running into the same problem. Have you solved it?

adizhol commented 3 years ago

Make sure you're normalizing with mean and std of 0.5. I'm not sure that's what resolved the problem, though.
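For reference, a minimal sketch of that preprocessing (assuming an 8-bit RGB numpy image; the actual inference script may do this slightly differently):

```python
import numpy as np
import torch

def preprocess(img_uint8: np.ndarray) -> torch.Tensor:
    """Scale an H x W x 3 uint8 image to [0, 1], then normalize with
    mean 0.5 / std 0.5 so the network sees values in [-1, 1]."""
    tensor = torch.from_numpy(img_uint8.astype(np.float32)).permute(2, 0, 1)
    tensor = tensor / 255.0          # -> [0, 1]
    tensor = (tensor - 0.5) / 0.5    # -> [-1, 1]
    return tensor.unsqueeze(0)       # add batch dimension
```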