Open · gpetty opened this issue 4 years ago
> can be obtained in less than a minute using np.fft.fft2 and np.fft.ifft2

Or using signal.correlate with method='fft'.
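As a sketch of that suggestion: signal.correlate already accepts 2-D inputs, and method='fft' routes through an FFT-based implementation, so the O(N²) cost of the direct method is avoided. The 512x512 image below is a hypothetical stand-in for the reporter's data.

```python
import numpy as np
from scipy import signal

# Hypothetical test image (any 2-D array works; the report used 4096x4096).
rng = np.random.default_rng(0)
img = rng.random((512, 512))

# signal.correlate handles N-d arrays; method='fft' uses FFT-based
# convolution internally, so cost is O(N log N) rather than O(N^2).
autocorr = signal.correlate(img, img, mode='full', method='fft')

# Zero lag sits at the center of the 'full' output and equals sum(img**2).
```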
correlate2d and convolve2d should be sped up to use the FFT method too, with a method={'direct', 'fft', 'auto'} parameter.
I thought there was an issue for this already, but I guess this is it. (Issue https://github.com/scipy/scipy/issues/13857 can be for adding boundary to convolve/correlate, and this issue can be for adding method to correlate2d/convolve2d.)
Basically it would need to pad the input arrays with 1 cycle of whatever the boundary condition is. So:

- boundary='wrap': No padding, just circular FFT convolution
- boundary='fill': Zero-pad with the fillvalue, same as correlate does with zeros.
- boundary='symm': Tile mirror images of the input to each side

This would use ~9 times as much memory(?) but be orders of magnitude faster.
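The boundary='symm' case above can already be emulated in user code today. The sketch below (convolve2d_symm_fft is a hypothetical helper, and it assumes odd-sized kernels) mirror-pads with np.pad(mode='symmetric'), which matches scipy's 'symm' condition, runs fftconvolve, and crops back:

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

def convolve2d_symm_fft(image, kernel):
    """Sketch: emulate convolve2d(..., mode='same', boundary='symm')
    with FFT-based convolution. Assumes an odd-sized kernel."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    # Tile mirror images of the input to each side; np.pad's 'symmetric'
    # mode matches scipy's boundary='symm'.
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='symmetric')
    out = fftconvolve(padded, kernel, mode='same')
    # Crop back to the original image size.
    return out[ph:ph + image.shape[0], pw:pw + image.shape[1]]

# Compare against the direct implementation on a small example.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
ker = rng.random((5, 5))
ref = convolve2d(img, ker, mode='same', boundary='symm')
fast = convolve2d_symm_fft(img, ker)
```

The padding-by-half-a-kernel trick is also why the memory overhead is bounded: for kernels no larger than the image, the padded array is at most ~9x the input, as estimated above.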
signal.correlate2d apparently uses a brute-force element-by-element convolution, whose computational cost scales with the square of the number of image pixels. I found that it doesn't complete even after many days when applied to a 4096x4096 image to obtain the 2D autocorrelation.
But the autocorrelation can be obtained in less than a minute using np.fft.fft2 and np.fft.ifft2. I suggest that (1) the doc page for correlate2d() describe the method currently used and point out its unsuitability for larger images, and (2) if possible in the future, either an option be added to use an FFT approach for larger images, or else the user be pointed to other scipy routines that can handle the task more efficiently.
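For reference, the FFT route mentioned above is a direct application of the convolution theorem: the circular (wrap-around) autocorrelation is the inverse transform of the power spectrum. A minimal sketch, using a hypothetical 1024x1024 image in place of the reporter's 4096x4096 one:

```python
import numpy as np

# Hypothetical test image; the 4096x4096 case from the report also
# finishes quickly with this approach.
rng = np.random.default_rng(0)
img = rng.random((1024, 1024))

# Convolution theorem: circular autocorrelation = ifft2(fft2(img) * conj(fft2(img))).
# Cost is O(N log N) in the number of pixels, versus O(N^2) for correlate2d.
F = np.fft.fft2(img)
autocorr = np.fft.ifft2(F * np.conj(F)).real

# np.fft.fftshift(autocorr) would move the zero-lag peak to the array center.
```

Note this gives periodic (boundary='wrap') behavior; zero-padding the input first recovers the boundary='fill' result that correlate2d produces by default.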