DepthAnything / Depth-Anything-V2

[NeurIPS 2024] Depth Anything V2: A More Capable Foundation Model for Monocular Depth Estimation
https://depth-anything-v2.github.io
Apache License 2.0

Loss for the relative depth model #137

PardisTaghavi opened this issue 3 months ago (Open)

PardisTaghavi commented 3 months ago

Hi, thanks for the great work.

Which loss function did you use for training the relative depth prediction model?

LiheYoung commented 2 months ago

We use two losses: the scale-shift-invariant loss and the gradient matching loss. Both of these are adapted from MiDaS (many thanks to MiDaS): https://gist.github.com/dvdhfnr/732c26b61a0e63a0abc8a5d769dbebd0.

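For anyone looking for a concrete reference, below is a minimal PyTorch sketch of those two terms, loosely following the linked MiDaS gist rather than the authors' released training code; the function names, the per-image tensor layout (B, H, W), the use of an MSE data term, and the number of pyramid scales are assumptions here.

```python
# Minimal sketch of the scale-and-shift-invariant loss and gradient matching loss
# (assumptions: PyTorch, tensors of shape (B, H, W), binary validity mask).
import torch


def compute_scale_and_shift(prediction, target, mask):
    # Least-squares fit of per-image scale s and shift t so that
    # s * prediction + t approximates target over valid (masked) pixels.
    a_00 = torch.sum(mask * prediction * prediction, (1, 2))
    a_01 = torch.sum(mask * prediction, (1, 2))
    a_11 = torch.sum(mask, (1, 2))
    b_0 = torch.sum(mask * prediction * target, (1, 2))
    b_1 = torch.sum(mask * target, (1, 2))

    det = a_00 * a_11 - a_01 * a_01
    valid = det > 0
    scale = torch.zeros_like(b_0)
    shift = torch.zeros_like(b_1)
    scale[valid] = (a_11[valid] * b_0[valid] - a_01[valid] * b_1[valid]) / det[valid]
    shift[valid] = (-a_01[valid] * b_0[valid] + a_00[valid] * b_1[valid]) / det[valid]
    return scale, shift


def ssi_loss(prediction, target, mask):
    # Scale-and-shift-invariant data term: squared error after aligning
    # the prediction to the target with the fitted scale and shift.
    scale, shift = compute_scale_and_shift(prediction, target, mask)
    aligned = scale.view(-1, 1, 1) * prediction + shift.view(-1, 1, 1)
    res = (aligned - target) ** 2
    return (mask * res).sum() / mask.sum().clamp(min=1)


def gradient_matching_loss(prediction, target, mask, scales=4):
    # Multi-scale gradient matching term on the residual between the aligned
    # prediction and the target, encouraging sharp and consistent depth edges.
    scale, shift = compute_scale_and_shift(prediction, target, mask)
    aligned = scale.view(-1, 1, 1) * prediction + shift.view(-1, 1, 1)
    total = 0.0
    for s in range(scales):
        step = 2 ** s
        diff = (aligned - target)[:, ::step, ::step]
        m = mask[:, ::step, ::step]
        grad_x = torch.abs(diff[:, :, 1:] - diff[:, :, :-1]) * (m[:, :, 1:] * m[:, :, :-1])
        grad_y = torch.abs(diff[:, 1:, :] - diff[:, :-1, :]) * (m[:, 1:, :] * m[:, :-1, :])
        total = total + (grad_x.sum() + grad_y.sum()) / m.sum().clamp(min=1)
    return total
```

In the MiDaS formulation the total objective is the data term plus a weighted gradient term, e.g. `ssi_loss(...) + alpha * gradient_matching_loss(...)`; the exact weight used for Depth Anything V2 is not stated in this thread, so treat `alpha` as a tunable hyperparameter.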
Feobi1999 commented 1 month ago

> We use two losses: the scale-shift-invariant loss and the gradient matching loss. Both of these are adapted from MiDaS (many thanks to MiDaS): https://gist.github.com/dvdhfnr/732c26b61a0e63a0abc8a5d769dbebd0.

For a generated depth map (disparity map), when computing the loss, is the target regressed directly against values in the 0-255 range?