DepthAnything / Depth-Anything-V2

[NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

pointcloud visualization #39

Closed · edgarriba closed this 4 months ago

edgarriba commented 4 months ago

hi! is there any example or experiment that shows what the pointcloud looks like?

kaixin-bai commented 4 months ago

The MDE task only outputs relative depth values in the 0-1 range, which cannot be converted to 3D without the exact camera intrinsics and scale information. As I recall, the paper mentions that if you want to recover metric 3D you need to finetune on a dataset with scale information.
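
If you do have a metric depth map and known pinhole intrinsics, the back-projection itself is simple. A minimal sketch (numpy only; the intrinsics and depth below are illustrative placeholders, not values from this repo):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) into an (N, 3) point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop zero / invalid depth pixels

# Placeholder intrinsics and depth -- replace with your camera calibration
# and the model's metric depth output.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.random.uniform(0.5, 3.0, size=(480, 640))
pts = backproject_depth(depth, fx, fy, cx, cy)
print(pts.shape)  # (N, 3)
```

Without intrinsics you can only guess a focal length, and without metric scale the cloud is only correct up to an unknown (and, for affine-invariant outputs, per-image) scaling.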

edgarriba commented 4 months ago

thanks!

ducha-aiki commented 4 months ago

Actually, there is a metric depth version: better than ZoeDepth, worse than Metric3D. https://github.com/DepthAnything/Depth-Anything-V2/tree/main/metric_depth
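
For actually viewing the cloud, something like the following works. This is a rough sketch assuming you already have a metric depth map (e.g. from the metric_depth model) and use Open3D for display; the intrinsics are placeholders to replace with your camera's calibration:

```python
import numpy as np
import open3d as o3d

# Placeholder pinhole intrinsics -- replace with your camera's calibration.
fx = fy = 525.0
cx, cy = 319.5, 239.5
h, w = 480, 640

# `depth` would come from the metric depth model; here it is a random placeholder in meters.
depth = np.random.uniform(0.5, 3.0, size=(h, w)).astype(np.float32)

intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy)
depth_img = o3d.geometry.Image(depth)
# depth_scale=1.0 because the depth map is already in meters.
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth_img, intrinsic, depth_scale=1.0, depth_trunc=10.0
)
o3d.visualization.draw_geometries([pcd])
```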