Open wonozlo opened 2 years ago
same question 👍
@wonozlo I think alt_cuda_corr is only a memory-efficient implementation that is used for testing memory usage when processing high-resolution input. The authors did not use this implementation for training, so the backward function of alt_cuda_corr is never called. By the way, I remember the backward implementation is wrong.
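If you did want to train through this module, the usual pattern would be to wrap the extension in a `torch.autograd.Function` so that its backward kernel actually gets called. Here is a minimal sketch; it assumes the extension exposes `forward(fmap1, fmap2, coords, radius)` returning a one-element list and `backward(fmap1, fmap2, coords, grad_corr, radius)` returning gradients for the three tensor inputs, so check the actual signatures in the extension source before using it:

```python
import torch
import alt_cuda_corr  # compiled CUDA extension from RAFT's alt_cuda_corr/


class AltCorrFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, fmap1, fmap2, coords, radius):
        ctx.save_for_backward(fmap1, fmap2, coords)
        ctx.radius = radius
        # Assumption: the extension returns a one-element list containing the
        # correlation volume.
        corr, = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)
        return corr

    @staticmethod
    def backward(ctx, grad_corr):
        fmap1, fmap2, coords = ctx.saved_tensors
        # Assumption: backward takes the saved inputs plus the incoming
        # gradient and returns gradients w.r.t. fmap1, fmap2 and coords.
        grad_fmap1, grad_fmap2, grad_coords = alt_cuda_corr.backward(
            fmap1, fmap2, coords, grad_corr.contiguous(), ctx.radius)
        # No gradient for the integer radius argument.
        return grad_fmap1, grad_fmap2, grad_coords, None


# Usage: corr = AltCorrFunction.apply(fmap1, fmap2, coords, radius)
```

With a wrapper like this, gradients would flow back into the feature maps, which is what the direct `alt_cuda_corr.forward` call skips.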
I don't know how the bug happened, but I fixed it in this PR: https://github.com/princeton-vl/RAFT/pull/186
Hi,
Thanks for such great research and well-organized code! I have one question about the alt_cuda_corr implementation. In the code, alt_cuda_corr is used as
alt_cuda_corr.forward(fmap1,fmap2,coords,radius)
However, the backward call is never used in the code. I also saw that multi-GPU runs report unused parameters, which implies that the gradient is not fully propagated to the model. Please tell me if there is something I missed! Thanks!
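For reference, one quick way to see that the raw extension call bypasses autograd is to check `grad_fn` on its output. A minimal sketch, assuming the extension is built and importable; the tensor shapes below are illustrative only and may not match the layout the kernel expects:

```python
import torch
import alt_cuda_corr  # compiled CUDA extension from RAFT's alt_cuda_corr/

# Illustrative shapes only; the real kernel may expect a different layout.
fmap1 = torch.randn(1, 48, 64, 128, device="cuda", requires_grad=True)
fmap2 = torch.randn(1, 48, 64, 128, device="cuda", requires_grad=True)
coords = torch.zeros(1, 48, 64, 2, device="cuda")
radius = 4

out = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)
corr = out[0]

# The raw extension call is not registered with autograd, so the output has no
# grad_fn and no gradient can flow back into fmap1/fmap2 through this path.
print(corr.requires_grad, corr.grad_fn)  # expected: False None
```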