mattpoggi / mono-uncertainty

CVPR 2020 - On the uncertainty of self-supervised monocular depth estimation

Question about self-teaching scheme in mono-uncertainty #8

Closed CSU-NXY closed 3 years ago

CSU-NXY commented 3 years ago

Hi, you mentioned in the paper that the self-teaching strategy lets us decouple depth and pose when modelling uncertainty. But I cannot figure out why this scheme provides appropriate uncertainty.

For example, if the teacher model is poorly trained and provides inaccurate depth estimates, self-teaching will still produce a depth uncertainty from the student model. In this situation, can we trust that uncertainty?

mattpoggi commented 3 years ago

Hi, great question. I believe the uncertainty can be trusted if the teacher itself can be (reasonably) trusted. This is the case considered in the paper, where the teacher is used on the same dataset it was trained on. In practice this means training the same network twice on the same dataset (KITTI in our case).
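For concreteness, the decoupling can be sketched as a single student update step. This is a minimal illustration, not the repo's actual code: the model and variable names are made up, and the loss is assumed to be a Laplacian-style negative log-likelihood.

```python
import torch

def self_teaching_step(student, teacher, images, optimizer):
    """One student update in a self-teaching scheme (illustrative sketch).

    The frozen teacher's depth map is the only supervision signal, so
    camera pose never enters the student's loss: depth and pose are
    decoupled, and the student can learn a per-pixel uncertainty.
    """
    with torch.no_grad():
        d_teacher = teacher(images)            # pseudo ground truth, no gradient
    d_student, log_sigma = student(images)     # depth + log-uncertainty heads
    # Laplacian-style negative log-likelihood (an assumed form)
    loss = (torch.abs(d_student - d_teacher) * torch.exp(-log_sigma)
            + log_sigma).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that nothing in the student's loss references relative camera poses or view synthesis; those only mattered while training the teacher.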

If the teacher has been trained elsewhere (e.g., NYU), I agree with you that the learned uncertainty might not be meaningful.

CSU-NXY commented 3 years ago

Thanks!

By the way, I'm curious about the relative performance of the teacher and student models. In your experiments the student is more accurate than the teacher. But in my mind, since the student learns from the teacher, the best it could do is predict exactly the same depth as the teacher. Could you please explain this a little more for me?

mattpoggi commented 3 years ago

I believe this is caused by the attenuation effect the NLL formulation introduces into the total loss. Indeed, we also tried training the student with a simple L1 loss, and it achieved slightly worse results than the teacher.
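A toy sketch of that attenuation effect, assuming a Laplacian-style NLL (function names are illustrative): where the student predicts a large sigma, the residual to the teacher is down-weighted, so the student is not forced to fit the teacher's noisy pixels the way a plain L1 loss would.

```python
import torch

def l1_loss(d_s, d_t):
    # plain L1: every pixel's residual counts equally
    return torch.abs(d_s - d_t).mean()

def nll_loss(d_s, log_sigma, d_t):
    # Laplacian-style NLL: the residual is scaled by 1/sigma
    # (attenuation), and log(sigma) penalises inflated uncertainty
    return (torch.abs(d_s - d_t) * torch.exp(-log_sigma) + log_sigma).mean()
```

On a pixel where the teacher is wrong, a high predicted sigma shrinks that pixel's contribution, which is one plausible reason the NLL-trained student can end up slightly more accurate than the teacher it learns from.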

CSU-NXY commented 3 years ago

I see. Thank you very much!