DepthAnything / Depth-Anything-V2

[NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

Difference? #176

Open zaoruis opened 1 month ago

zaoruis commented 1 month ago

Is there any difference in model structure between Depth-Anything-V2 and the original Depth Anything? In downstream tasks, can the two be used as separate comparison methods?

LiheYoung commented 1 month ago

There is only one minor difference: in V1, we unintentionally used features from the last four layers of DINOv2 for decoding, whereas in V2 we use intermediate features instead. This modification originates from this issue, but we found that it does not affect the model's performance on our task.
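
For anyone wanting to see what the two feature-tapping schemes look like in code, here is a minimal sketch using DINOv2's public `get_intermediate_layers` API, which accepts either a count of final blocks or an explicit list of block indices. The ViT-L indices `[4, 11, 17, 23]` follow the setting used in this repo, but treat the exact values as illustrative:

```python
import torch

# Load a DINOv2 ViT-L/14 backbone via torch.hub (downloads weights on first use).
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
backbone.eval()

# 518x518 is divisible by the 14-pixel patch size, giving a 37x37 token grid.
x = torch.randn(1, 3, 518, 518)

with torch.no_grad():
    # V1-style: passing an int takes features from the *last* n blocks
    # (for ViT-L with 24 blocks, that is blocks 20-23).
    feats_v1 = backbone.get_intermediate_layers(x, n=4, reshape=True)

    # V2-style: passing explicit indices taps evenly spaced *intermediate* blocks.
    # [4, 11, 17, 23] matches the ViT-L setting in Depth-Anything-V2.
    feats_v2 = backbone.get_intermediate_layers(x, n=[4, 11, 17, 23], reshape=True)

for f in feats_v2:
    print(f.shape)  # torch.Size([1, 1024, 37, 37]) for each tapped block
```

Since both schemes return four feature maps of identical shape, the decoder is unchanged between V1 and V2; only which transformer blocks feed it differs.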