naver / dust3r

DUSt3R: Geometric 3D Vision Made Easy
https://dust3r.europe.naverlabs.com/

Use existing depth maps rather than inference #11

Closed DenTechs closed 6 months ago

DenTechs commented 7 months ago

Would it be possible to use existing depth maps, such as from an iPhone's lidar or from a stereo camera, rather than doing monocular estimation?

vincent-leroy commented 6 months ago

Hello @DenTechs, yes, you can use depthmaps from other sensors if they are aligned to your RGB images. Unfortunately, our code does not support this out of the box, since we only use RGB information. What I can suggest is to find the transformation between your depths and the prediction for view 1, `pred1['pts3d']`. You can then apply the same transformation to `pred2['pts3d_in_other_view']`. This will allow you to recover the relative camera poses between your views at the scale of your depthmaps.
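A minimal sketch of the scale-alignment step described above, assuming a metric sensor depthmap of shape `(H, W)` and DUSt3R pointmaps of shape `(H, W, 3)` whose z-channel is depth. The function name `align_to_sensor_scale` and the median-of-ratios estimator are illustrative choices, not part of the dust3r API:

```python
import numpy as np

def align_to_sensor_scale(sensor_depth, pred1_pts3d, pred2_pts3d_other):
    """Rescale DUSt3R pointmaps to the metric scale of a sensor depthmap.

    sensor_depth     : (H, W) metric depth aligned to the view-1 RGB image
    pred1_pts3d      : (H, W, 3) pointmap of view 1 in its own frame
    pred2_pts3d_other: (H, W, 3) pointmap of view 2 in view 1's frame
    """
    pred_depth = pred1_pts3d[..., 2]               # z-channel of view-1 pointmap
    valid = (sensor_depth > 0) & (pred_depth > 0)  # skip holes in the lidar depth
    # Robust per-scene scale: median of per-pixel depth ratios.
    scale = np.median(sensor_depth[valid] / pred_depth[valid])
    return pred1_pts3d * scale, pred2_pts3d_other * scale, scale
```

Since both pointmaps are expressed in view 1's frame, multiplying both by the same scalar keeps them consistent, and any relative pose recovered from them inherits the metric scale of the sensor.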

If you want to perform global alignment, you might need to modify the scene optimizer in depth and adapt it to your use case. You can try to hard-set the depthmaps here https://github.com/naver/dust3r/blob/517a4301745c650730f65482fb6de9889c0b2db9/dust3r/cloud_opt/optimizer.py#L29 and disable their gradients to keep them constant during alignment.
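The "hard-set and disable gradient" step is the standard PyTorch pattern of overwriting a parameter in place and freezing it. A hedged sketch, where `depth_params` stands in for the optimizer's per-image depth parameters (check `optimizer.py` at the link above for the actual attribute name in your version):

```python
import torch

def freeze_depthmaps(depth_params, known_depths):
    """Hard-set per-image depth parameters and exclude them from optimization.

    depth_params : iterable of torch.nn.Parameter (one per image)
    known_depths : matching iterable of depth tensors from the sensor
    """
    with torch.no_grad():
        for p, d in zip(depth_params, known_depths):
            p.copy_(d.reshape(p.shape))  # overwrite with the sensor depth
    for p in depth_params:
        p.requires_grad_(False)          # keep it constant during alignment
```

With the depths frozen, the global alignment only optimizes the remaining parameters (poses, intrinsics, per-view scales), so the result stays anchored to the sensor's scale.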

Good luck

Meky-hqd commented 4 months ago


Thanks for your suggestion. I have tried to replace the depth with values from a real device in this way, but the output shows that the scale between `pw_pts3d` and my depth may not match. I also tried to align the scales using the `pts3d` of one image, but it doesn't work. Is there any good method to align the true depth with the model output?