Closed Beniko95J closed 4 years ago
Well...
I think the equation is not correct. Depth is just 1.0 / predicted_inversedepth, and saving to PNG loses too much precision.
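To illustrate the precision point, here is a minimal numpy sketch (with made-up inverse depth values, not actual MVDepthNet output) of how an 8-bit PNG round trip distorts the recovered depth, especially for far points where the inverse depth is small:

```python
import numpy as np

# Hypothetical inverse depths in MVDepthNet's range (0.02 to 2 m^-1).
inv_depth = np.array([0.03, 0.1, 0.5, 1.0, 2.0], dtype=np.float32)

# Depth is simply the reciprocal of the inverse depth.
depth = 1.0 / inv_depth

# Round-trip through an 8-bit, PNG-style quantization (0..255).
lo, hi = 0.02, 2.0
quantized = np.round((inv_depth - lo) / (hi - lo) * 255.0)
recovered_inv = quantized / 255.0 * (hi - lo) + lo
recovered_depth = 1.0 / recovered_inv

# One quantization step is ~0.0078 m^-1, which barely moves near points
# but shifts far points (small inverse depth) by metres.
err = np.abs(recovered_depth - depth)
print(err)
```

The farthest point here (inverse depth 0.03, i.e. about 33 m) moves by several metres after quantization, while nearby points barely change, which matches the blocky far-field geometry typically seen in such point clouds.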
BTW, the prediction does not look very smooth because there is no smoothness loss during training. I am now working on a project that achieves the best-quality motion stereo estimation on the DeMoN dataset, and I will open-source it when all the training is done.
Thank you for the quick reply. That will be very nice. But for now, do you have any advice on converting an inverse depth map to a point cloud without much precision loss?
If you must save the map first, you can use numpy.save() to save it in float32 format.
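A small sketch of that suggestion (the map here is random placeholder data, and the file path is just an example): saving the raw float32 array with numpy.save() is lossless, so the depth can be recovered exactly after loading:

```python
import numpy as np
import os
import tempfile

# Hypothetical predicted inverse depth map (values in 0.02 to 2 m^-1).
inv_depth = np.random.uniform(0.02, 2.0, size=(256, 320)).astype(np.float32)

# Save losslessly as float32 instead of quantizing to an 8-bit PNG.
path = os.path.join(tempfile.gettempdir(), "inv_depth.npy")
np.save(path, inv_depth)

# Load it back; the values are bit-identical to what was saved.
restored = np.load(path)
assert np.array_equal(restored, inv_depth)

# Convert to metric depth only after loading, keeping full precision.
depth = 1.0 / restored
```
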
By saving the inverse depth map in float32 format, I got a better result. Thank you for the advice, and I look forward to seeing your next work!
Hi, I am trying to convert the predicted depth map to a point cloud, but I am running into some problems. I use the image pair from example2.py for the test. Since MVDepthNet predicts metric inverse depth (0.02 ~ 2 m^-1), I saved the inverse depth map normalized to 0~255 as a PNG image as follows. White means large inverse depth, i.e. the pixel is close to the camera.
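The original snippet is not reproduced in the thread; a sketch of the normalization described (assuming simple min-max scaling of the fixed 0.02~2 m^-1 range to 0..255, with placeholder data) might look like:

```python
import numpy as np

# Placeholder for the predicted inverse depth map (0.02 to 2 m^-1).
inv_depth = np.random.uniform(0.02, 2.0, size=(256, 320)).astype(np.float32)

# Scale to 0..255: white (255) = large inverse depth = close to the camera.
lo, hi = 0.02, 2.0
img = np.clip((inv_depth - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

# The 8-bit image would then be written out,
# e.g. with cv2.imwrite("inv_depth.png", img).
```
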
I recover the depth from the saved PNG depth map as follows.
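Again the snippet itself is missing; assuming the inverse of the normalization above, the recovery step might look like this (the 8-bit values are simulated here instead of actually reading a PNG):

```python
import numpy as np

# Simulate the 8-bit values that would be read back from the PNG
# (a real load would use e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)).
img = np.random.randint(0, 256, size=(256, 320)).astype(np.uint8)

# Undo the 0..255 scaling back to inverse depth (0.02 to 2 m^-1).
lo, hi = 0.02, 2.0
inv_depth = img.astype(np.float32) / 255.0 * (hi - lo) + lo

# Metric depth is the reciprocal of the inverse depth.
depth = 1.0 / inv_depth
```

Note that each 8-bit step is already ~0.0078 m^-1 of inverse depth, so the recovered depth is heavily quantized at long range, which is consistent with the poor point cloud reported below.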
But when I check the generated point cloud, it looks very bad, like this:
It seems like the depth is predicted very badly, even though the depth map image looks good. Am I missing something needed to do the conversion correctly?
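For reference, the depth-to-point-cloud step itself is a standard pinhole back-projection. A sketch with placeholder intrinsics (the real fx, fy, cx, cy must come from the camera used in example2.py, not these values):

```python
import numpy as np

# Hypothetical pinhole intrinsics and image size; replace with the
# actual calibration of the example2.py camera.
fx, fy, cx, cy = 525.0, 525.0, 160.0, 128.0
h, w = 256, 320

# Placeholder metric depth map in metres.
depth = np.random.uniform(0.5, 50.0, size=(h, w)).astype(np.float32)

# Back-project every pixel (u, v) with depth z into camera coordinates:
#   x = (u - cx) * z / fx,   y = (v - cy) * z / fy,   and z stays as-is.
u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

If this step uses the correct intrinsics, the remaining distortion is most likely the 8-bit quantization of the saved inverse depth rather than the projection math.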
Best regards