stalin18 closed this issue 6 years ago
Hi, I think this would be quite complicated and would probably hurt performance a lot; the correlation already seems to be the bottleneck operation. Other implementations (e.g. the NVIDIA FlowNet implementation) also implement it directly as CUDA code.
For 1D correlation, there is a reference implementation here: https://github.com/lmb-freiburg/flownet2/tree/master/src/caffe/layers/correlation_layer1d.cu.
Hi, thanks for your reply! I agree about the implementation complexity and performance concerns for 2D correlation. What about the learning process, though? In the CUDA code I also see a backward-pass implementation, which has something to do with gradient computation.
Regarding 1D correlation, I think it can be computed simply by shifting the feature maps of one image (say, the right) and computing an element-wise dot product with the feature maps of the other image (say, the left). I am really not sure about the backward-pass and gradient-computation part of the CUDA implementation, though.
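The shifted-map idea can be sketched in plain NumPy (standing in for the equivalent TensorFlow tensor ops). The function name `correlation_1d`, the `(H, W, C)` layout, and the disparity convention (left pixel `x` matches right pixel `x - d`) are assumptions for illustration, not from any particular implementation:

```python
import numpy as np

def correlation_1d(left, right, max_disp):
    """Sketch of a 1D correlation for stereo.

    For each disparity d in [0, max_disp], shift the right feature map
    and take the per-pixel dot product over channels with the left map.
    left, right: arrays of shape (H, W, C).
    Returns an array of shape (H, W, max_disp + 1); positions where the
    shifted map runs out of pixels are left as zero.
    """
    H, W, C = left.shape
    out = np.zeros((H, W, max_disp + 1), dtype=left.dtype)
    for d in range(max_disp + 1):
        # left pixel x is compared against right pixel x - d (valid for x >= d)
        out[:, d:, d] = np.sum(left[:, d:, :] * right[:, :W - d, :], axis=-1)
    return out
```

The same slicing, multiply, and channel-sum pattern can be expressed with framework tensor ops, though (as noted above) looping over disparities in Python is expected to be much slower than a fused CUDA kernel.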
The backward pass needs to be specified for any custom op if you want to do backprop through it.
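As a sketch of what such a backward pass involves: for a per-pixel dot-product correlation `corr[x] = sum_c left[x, c] * right[x, c]`, the input gradients follow from the product rule and can be checked numerically. The `corr_forward`/`corr_backward` names and shapes below are illustrative, not taken from any framework's API:

```python
import numpy as np

def corr_forward(left, right):
    # Per-pixel dot product over the channel axis
    return np.sum(left * right, axis=-1)

def corr_backward(left, right, grad_out):
    # grad_out has the shape of corr_forward's output; by the product rule,
    # d corr / d left = right and d corr / d right = left.
    grad_left = grad_out[..., None] * right
    grad_right = grad_out[..., None] * left
    return grad_left, grad_right

# Finite-difference check of grad_left on the scalar loss sum(corr)
rng = np.random.default_rng(0)
left = rng.standard_normal((4, 5, 3))
right = rng.standard_normal((4, 5, 3))
grad_out = np.ones((4, 5))
grad_left, _ = corr_backward(left, right, grad_out)

eps = 1e-6
i = (2, 3, 1)                 # an arbitrary element to perturb
bumped = left.copy()
bumped[i] += eps
numeric = (corr_forward(bumped, right).sum() - corr_forward(left, right).sum()) / eps
assert abs(numeric - grad_left[i]) < 1e-4
```

A custom op (whether in CUDA or via something like `tf.custom_gradient`) has to supply exactly this kind of mapping from upstream gradients to input gradients.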
Hi, I would like to thank you for making this source code available; it really helped me implement a 1D correlation layer for stereo disparity estimation. Thanks a lot!
Hi, I am quite new to TensorFlow. I was wondering why we need a CUDA implementation for this. I mean, can't we write the correlation function using Python / TensorFlow APIs?
Also, for the 1D correlation in stereo disparity estimation, is it okay to just compute dot products of corresponding elements of the left / right feature maps (the kernel size is thus 1x1)?
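A small sanity check of the 1x1-kernel idea, using NumPy with deterministic one-hot features (the variable names and the uniform-shift setup are purely illustrative): if the right map is the left map displaced by a known disparity, the argmax of the per-pixel dot-product correlation over the disparity axis recovers that shift.

```python
import numpy as np

W, shift, max_disp = 6, 2, 3
# One-hot features per pixel: left[x] . left[y] = 1 iff x == y,
# which makes the winner-take-all result deterministic.
left = np.eye(W)[None, :, :]                   # shape (1, W, W)
right = np.zeros_like(left)
right[:, :W - shift, :] = left[:, shift:, :]   # simulate a uniform disparity of `shift`

# 1x1-kernel correlation: per-pixel channel dot product at each disparity
corr = np.zeros((1, W, max_disp + 1))
for d in range(max_disp + 1):
    corr[:, d:, d] = np.sum(left[:, d:, :] * right[:, :W - d, :], axis=-1)

# Winner-take-all disparity estimate recovers the simulated shift
disparity = corr.argmax(axis=-1)
assert (disparity[0, shift:] == shift).all()
```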