zzangjinsun / NLSPN_ECCV20

Park et al., Non-Local Spatial Propagation Network for Depth Completion, ECCV, 2020
MIT License

Questions about code. #29

Closed jikerWRN closed 3 years ago

jikerWRN commented 3 years ago

Thanks for your great work and elegant code! I have a few questions about code.

  1. I have the same confusion as zerowfz, who asked the same question in issue #16, but I couldn't find an explanation there. Why should `legacy` be set to true for the pre-trained models? Is there any difference between the pre-trained models and newly trained ones? Could you explain the meaning of the following lines of code? Thanks!

```python
if self.args.legacy:
    offset_tmp[:, 0, :, :] = offset_tmp[:, 0, :, :] + hh - (self.k_f - 1) / 2
    offset_tmp[:, 1, :, :] = offset_tmp[:, 1, :, :] + ww - (self.k_f - 1) / 2
```

  2. In the Affinity Normalization section of the paper, the parameter \Lambda has a range (\Lambda_min <= \Lambda <= \Lambda_max), but in the code it is only set as a learnable parameter; I don't find any constraint on it. Does this have any effect on the result?

  3. In the test part of the code, I think `with torch.no_grad():` should be added to prevent unnecessary memory consumption during evaluation.

Looking forward to your reply. Thanks!
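For context, the pattern suggested in point 3 looks like this. A minimal sketch only: the `nn.Conv2d` stands in for the actual NLSPN network, and the tensor shapes are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in model; the point is the inference pattern, not the architecture.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)
model.eval()

sample = torch.randn(1, 1, 8, 8)

# torch.no_grad() disables autograd bookkeeping, so no computation graph
# (with its intermediate activations) is retained during testing.
with torch.no_grad():
    output = model(sample)

# The output carries no gradient history.
assert not output.requires_grad
```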

zzangjinsun commented 3 years ago

Hello @jikerWRN, thank you for your interest in my work.

  1. There was a small mix-up during development regarding the offsets for confidences. I fixed the issue after releasing the pre-trained models, but I didn't re-train and update them. Thus, I added the `--legacy` flag so that the fixed code still supports the original pre-trained models.

  2. (I think you mean \gamma.) I tested various gamma values (> 0) in the supplementary material (available on the ECVA website) and didn't face any training problems. If you want, you can add a constraint yourself here: https://github.com/zzangjinsun/NLSPN_ECCV20/blob/ba33fa5d9ea62ca970026a145ab18fab76d79d4a/src/model/nlspnmodel.py#L88

  3. Thank you for your suggestion. I will add it in the next update.
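One minimal way to add such a constraint, as suggested in point 2 above. This is a sketch only: the class name `BoundedGamma` and the bound values are illustrative and not part of the repository.

```python
import torch
import torch.nn as nn


class BoundedGamma(nn.Module):
    """Learnable scalar kept inside [g_min, g_max] at every forward pass.

    NLSPN itself leaves gamma as an unconstrained learnable parameter;
    this sketch shows one way to enforce a range if desired.
    """

    def __init__(self, init=1.0, g_min=0.1, g_max=10.0):
        super().__init__()
        self.g_min = g_min
        self.g_max = g_max
        self.gamma = nn.Parameter(torch.tensor(float(init)))

    def forward(self):
        # clamp keeps the value in range; gradients still flow whenever
        # the parameter lies strictly inside the bounds.
        return self.gamma.clamp(self.g_min, self.g_max)


g = BoundedGamma(init=20.0)
print(float(g()))  # 10.0 -- the initial 20.0 is clamped to g_max
```

Note that `clamp` has zero gradient once the parameter sits at a bound; a reparameterization such as `g_min + (g_max - g_min) * torch.sigmoid(raw)` avoids that if it matters for training.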