Open PardisTaghavi opened 3 months ago
We use two losses: the scale-shift-invariant loss and the gradient matching loss. Both of these are adapted from MiDaS (many thanks to MiDaS): https://gist.github.com/dvdhfnr/732c26b61a0e63a0abc8a5d769dbebd0.
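For reference, the two losses can be sketched roughly as follows. This is a minimal NumPy sketch of the ideas described in the MiDaS gist linked above (least-squares scale/shift alignment, MSE on the aligned prediction, and a multi-scale gradient matching term on the residual); the function names, the MSE variant of the data term, and the number of scales are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def align_scale_shift(pred, target, mask):
    # Solve least-squares for (s, b) minimizing ||s*pred + b - target||^2
    # over valid pixels, then apply the alignment to the full prediction.
    p, t = pred[mask], target[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return s * pred + b

def ssi_loss(pred, target, mask):
    # Scale-shift-invariant data term: MSE after optimal alignment
    # (the gist uses a closed-form per-image solve like the one above).
    aligned = align_scale_shift(pred, target, mask)
    return np.mean((aligned - target)[mask] ** 2)

def gradient_matching_loss(pred, target, mask, num_scales=4):
    # Multi-scale gradient matching on the residual R = aligned - target:
    # penalize |dR/dx| + |dR/dy| at several downsampled resolutions.
    aligned = align_scale_shift(pred, target, mask)
    total = 0.0
    for k in range(num_scales):
        step = 2 ** k
        r = (aligned - target)[::step, ::step]
        m = mask[::step, ::step]
        gx = np.abs(np.diff(r, axis=1))   # horizontal finite differences
        gy = np.abs(np.diff(r, axis=0))   # vertical finite differences
        mx = m[:, 1:] & m[:, :-1]         # both neighbors must be valid
        my = m[1:, :] & m[:-1, :]
        total += gx[mx].sum() / max(mx.sum(), 1)
        total += gy[my].sum() / max(my.sum(), 1)
    return total
```

Because of the alignment step, any prediction that differs from the target only by a global scale and shift yields (near-)zero loss, which is exactly the invariance needed when training on mixed datasets with unknown depth scales.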
For a generated depth map (disparity map), does the target directly regress values in the 0-255 range when computing the loss?
Hi, thanks for the great work.
Which loss functions did you use to train the relative depth prediction model?