Hello, thanks for your great work and contribution. The unification applies winner-take-all selection as the basis for generating the depth map for the next, finer stage. However, torch.max() is a non-differentiable operation, so the 3D-CNN parameters would receive no gradient except, at best, those tied to the selected depth hypothesis. How do the gradients propagate backward in this case?
You are right! So we optimize the model by directly constraining the cost volume rather than the depth, which means the non-differentiable winner-take-all step never sits on the backward path. You can refer to our paper or here for more details.
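For anyone following along, here is a minimal PyTorch sketch of the idea (function names are hypothetical, and plain cross-entropy stands in for the paper's actual volume loss): the winner-take-all depth is computed under no_grad and only positions the finer stage's hypotheses, while the training loss is applied to the volume itself, so gradients reach the 3D-CNN through every depth hypothesis.

```python
import torch
import torch.nn.functional as F

def winner_take_all_depth(prob_volume, depth_hypotheses):
    """Pick, per pixel, the depth hypothesis with the highest probability.

    prob_volume:      (B, D, H, W) probabilities from the 3D-CNN
    depth_hypotheses: (B, D, H, W) depth value of each hypothesis
    """
    # argmax is non-differentiable, so detach: the WTA depth only
    # positions the next stage's hypotheses and carries no gradient
    with torch.no_grad():
        idx = prob_volume.argmax(dim=1, keepdim=True)             # (B, 1, H, W)
        depth = torch.gather(depth_hypotheses, 1, idx).squeeze(1)  # (B, H, W)
    return depth

def volume_supervision_loss(logits, depth_hypotheses, gt_depth):
    """Supervise the volume directly instead of the regressed depth.

    logits: (B, D, H, W) raw 3D-CNN output before softmax. Treating each
    pixel as a D-way classification over hypotheses means every slice of
    the volume (and hence the whole 3D-CNN) receives gradient.
    """
    with torch.no_grad():
        # index of the hypothesis closest to the ground-truth depth
        gt_idx = (depth_hypotheses - gt_depth.unsqueeze(1)).abs().argmin(dim=1)
    # cross-entropy here is a simplified stand-in for the paper's loss
    return F.cross_entropy(logits, gt_idx)
```

Because the loss depends on the logits of all D hypotheses and not on the argmax output, torch.max() is only ever used in the forward-only hypothesis-placement step, and backpropagation proceeds normally.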