Closed adizhol closed 3 years ago
The depth map values are very low, while the disparity is normalized. As such, disparity might have a problem if you have a very far outlier (the normalization will make everything else flatter), and depth will have a problem if the scale is not fitted, which is the case here. I suspect that everything is below 1 depth unit (there is no proper unit to this figure, since the network didn't learn scale).
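A minimal sketch of both effects, with made-up disparity values (the 100.0 is an artificial far outlier, not from the network):

```python
import numpy as np

# Hypothetical disparity values; 100.0 plays the role of a far outlier
disp = np.array([0.5, 0.8, 1.2, 1.0, 100.0])

# Normalizing by the max for visualization squashes everything else
disp_norm = disp / disp.max()
# the four non-outlier values all end up below ~0.012 -> a "flat" map

# Depth is the inverse of disparity, up to an unknown global scale
# the network never learned, so its absolute values are meaningless
depth = 1.0 / disp
```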
You can set up the color scale of the depth map so that it is more readable by modifying the max depth here and putting something lower than 10: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/run_inference.py#L81
I should probably make it an option in the script argument
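As a sketch (the function name and exact rescaling here are illustrative, not the repo's actual code), the clamp amounts to something like this, where lowering `max_depth` spreads the color scale over the near values:

```python
import numpy as np

def depth_to_vis(depth, max_depth=10.0):
    """Clip depth at max_depth and rescale to [0, 1] for colormapping.

    With max_depth=10 and all true values below 1 depth unit,
    everything lands in the bottom tenth of the color scale.
    """
    d = np.clip(depth, 0.0, max_depth)
    return d / max_depth

depth = np.array([0.2, 0.5, 0.9, 12.0])
vis_10 = depth_to_vis(depth, max_depth=10.0)  # near values crammed near 0
vis_1 = depth_to_vis(depth, max_depth=1.0)    # near values now span [0, 1]
```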
Thanks,
So how did they get the pretty depth maps in the original paper? Or are they showing disparity?
Also, when I set a lower max_depth, I can see details, but the result is very blurry and nothing like in the paper.
I met the same problem. Have you solved it?
Make sure you're normalizing the input images with a mean and std of 0.5. I'm not sure that's what resolved the problem, though.
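For reference, a minimal sketch of that preprocessing (assuming images already scaled to [0, 1]); normalizing with mean 0.5 and std 0.5 maps them to [-1, 1]:

```python
import numpy as np

def normalize(img, mean=0.5, std=0.5):
    # (x - 0.5) / 0.5 maps [0, 1] pixel values to [-1, 1]
    return (img - mean) / std

img = np.linspace(0.0, 1.0, 5)
print(normalize(img))  # [-1.  -0.5  0.   0.5  1. ]
```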
Hi!,
I ran the pre-trained DispNet on a test set from KITTI (raw), and here are the inverse depth map and disparity. The depth map doesn't make sense...