vt-vl-lab / 3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
https://shihmengli.github.io/3D-Photo-Inpainting/

LeRes method #136

Open scratcher28 opened 2 years ago

scratcher28 commented 2 years ago

I've tried to switch BoostingMonocularDepth to the new LeReS method (instead of MiDaS) and got odd results: a major mismatch between foreground and background. Any ideas on how to fix that?

Klanly commented 2 years ago

Since BoostingMonocularDepth's readme notes that >>>MiDaS-v2 and SGRnet estimate inverse depth while LeReS estimates depth<<<, you may add something like

    if algo == 2:
        depth = 65535.0 - depth

into boostmonodepth_utils.py to use the depthNet 2 (LeReS) algo.
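To illustrate the convention flip in isolation, here is a minimal sketch (not the actual boostmonodepth_utils.py code; the helper name and the 0/1/2 algo mapping are assumptions based on the readme note quoted above). MiDaS-v2 and SGRnet already output inverse depth, so they pass through; LeReS outputs depth, so it is flipped within the 16-bit range:

```python
import numpy as np

def to_inverse_depth(depth, algo):
    """Convert a 16-bit depth map to the inverse-depth convention.

    algo 0/1 (MiDaS-v2 / SGRnet) are assumed to already be inverse
    depth and pass through unchanged; algo 2 (LeReS) is assumed to be
    depth, so it is flipped within the uint16 value range.
    """
    depth = depth.astype(np.float64)
    if algo == 2:  # LeReS: near objects have small values, so invert
        depth = 65535.0 - depth
    return depth
```

With this, a far pixel (65535) under LeReS maps to 0 in the inverse-depth convention, matching what MiDaS would produce directly.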

My colab notebook: https://colab.research.google.com/drive/1fVsU6DUbgO5BkU0ws20A8odkZllYDkht

Patched 3d-photo-inpainting & BoostingMonocularDepth, including the ...mesh.node[... => ...mesh.nodes[... patch (which starts here), mounted at (((gdrive)))/ttmmpp/ML/3d-photo-inpainting: https://drive.google.com/drive/folders/1euIX6aoJ4k1mxQMfIhZ5VWlLSTktEPer
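For context on the mesh.node => mesh.nodes rename: networkx 2.4 removed the `Graph.node` attribute alias in favor of `Graph.nodes`, which is why older 3d-photo-inpainting code breaks on newer networkx. A minimal sketch of the change (the `depth` attribute here is just an illustrative example, not the repo's actual node data):

```python
import networkx as nx

# Build a tiny graph with one attributed node, as the mesh code does
# with per-pixel nodes.
G = nx.Graph()
G.add_node((0, 0), depth=1.0)

# Old code (networkx < 2.4):  G.node[(0, 0)]['depth']
# New code (networkx >= 2.4): the .node alias is gone, use .nodes
d = G.nodes[(0, 0)]['depth']
```

The patch is therefore a mechanical rename of `mesh.node[` to `mesh.nodes[` wherever node attributes are accessed.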

donlinglok1 commented 1 year ago

> Since BoostingMonocularDepth's readme notes that >>>MiDaS-v2 and SGRnet estimate inverse depth while LeReS estimates depth<<<, you may add something like
>
>     if algo == 2:
>         depth = 65535.0 - depth
>
> into boostmonodepth_utils.py to use the depthNet 2 algo.
>
> ...

Hello @Klanly , I just tried to implement LeReS too,

https://github.com/vt-vl-lab/3d-photo-inpainting/pull/188/files

But I feel that the old 3D result (MiDaS) is better than the LeReS 3D result... is there some code I missed? (I will try your colab later, thank you for sharing!)

[Update] I have tested your code, and we get the same result. So I guess this is just how the LeReS implementation performs.