fangchangma / self-supervised-depth-completion

ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"
MIT License

A question about '6.4 On Input Sparsity' in your ICRA paper #8

Closed AbnerCSZ closed 5 years ago

AbnerCSZ commented 5 years ago

Hello!

Thank you for your great work. While reading the paper, I had a question about Section 6.4, On Input Sparsity.

In Figure 6 you show the results when training with the self-supervised framework: 'using both RGB and sparse depth yields the same level of accuracy as using sparse depth only'.

Could you tell me if the following guess is correct?

When we have only sparse LiDAR input, we have only the depth loss and the smoothness loss during training, and the network architecture in Figure 2 has only the 32-channel LiDAR input branch.

If my guess is right, the input setting in your paper degenerates to the same one as Sparsity Invariant CNNs (LiDAR only). But even in this case, your network gets better results. So how do you show that the improvement comes from the self-supervised framework or the photometric loss, and not from your network simply being better optimized for LiDAR input?
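To make my guess concrete, here is a minimal sketch of what I understand the sparse-depth-only training objective to be: with no RGB input, the photometric loss drops out and only a depth loss on valid LiDAR pixels plus a smoothness loss remain. The function names, the second-order smoothness formulation, and the weight `lam` are my own illustrative assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np

def depth_loss(pred, sparse_target):
    """MSE on pixels where the sparse LiDAR target is valid (> 0).
    Assumed form; the paper may use a different per-pixel penalty."""
    mask = sparse_target > 0
    if not mask.any():
        return 0.0
    return float(np.mean((pred[mask] - sparse_target[mask]) ** 2))

def smoothness_loss(pred):
    """Penalize second-order depth differences (one common formulation)."""
    dxx = np.diff(pred, n=2, axis=1)  # horizontal second differences
    dyy = np.diff(pred, n=2, axis=0)  # vertical second differences
    return float(np.mean(np.abs(dxx)) + np.mean(np.abs(dyy)))

rng = np.random.default_rng(0)
pred = rng.random((8, 8))
# simulate sparse LiDAR coverage: ~10% of pixels carry a valid depth
target = rng.random((8, 8)) * (rng.random((8, 8)) > 0.9)

lam = 0.1  # illustrative smoothness weight, not the paper's value
total = depth_loss(pred, target) + lam * smoothness_loss(pred)
```

In the full self-supervised setting, a photometric warping term between adjacent RGB frames would be added to `total`; in the LiDAR-only setting above, that term is absent.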

Thank you for your help!