vt-vl-lab / 3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
https://shihmengli.github.io/3D-Photo-Inpainting/

Using the true depth maps that come with datasets #129

Open VjayalakshmiK opened 3 years ago

VjayalakshmiK commented 3 years ago

I want to test your model on a dataset that provides ground-truth depth maps along with frames and pose information, so I feed this true depth instead of the depth estimated by MiDaS. For indoor scenes I have no problem. In outdoor scenes, however, regions like the sky and far-off buildings get rendered as grey, i.e., these visible regions (not disocclusions) are corrupted during rendering.

I was able to work around this by saturating the depth map to lower values. The scenes with this problem have maximum depth values ranging from 65 to 10^10. If I saturate with `depth[depth > 100] = 100`, the far-off regions are no longer rendered grey, but the warping error becomes significant, showing up as an offset. For higher thresholds, the grey-region problem persists.

So I am wondering whether the format in which I feed the depth is correct. If I feed external depth, can I feed it as-is, or is some preprocessing required?
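For reference, the saturation workaround described above can be sketched as below. This is a minimal illustration, not the repo's own preprocessing; the function name and the `max_depth` threshold are arbitrary choices, and in practice the threshold would need to be tuned per scene (sky pixels in ground-truth depth maps are often stored as huge sentinel values or infinities, which is consistent with the 10^10 maxima mentioned):

```python
import numpy as np

def saturate_depth(depth, max_depth=100.0):
    """Clip extreme far-field depth values before feeding the map downstream.

    Ground-truth depth for sky/far regions may contain sentinel values
    (e.g. 1e10) or non-finite entries; both are capped at `max_depth`.
    `max_depth` is a hypothetical, scene-dependent threshold.
    """
    depth = np.asarray(depth, dtype=np.float32)
    # Map NaN/inf entries to the cap so they don't propagate through warping.
    depth = np.nan_to_num(depth, nan=max_depth, posinf=max_depth)
    # Equivalent to depth[depth > max_depth] = max_depth, without mutating input.
    return np.clip(depth, a_min=None, a_max=max_depth)
```

As noted, clipping this aggressively trades the grey-region artifact for a visible warping offset, so it is a stopgap rather than a fix.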