thomasaarholt opened 5 years ago
I've got it down to 150 ms with tensorflow on an RTX 2080 Ti. It takes 3 s to initialise on the first filter, and then 150 ms for each new filtering run (one new random dataset). I'm not sure where the initial 3 s comes from; it may be the time the GPU takes to "boot up" or equivalent. Script here.
Nice! Yes, this function is coded terribly! The basic thing we're doing in SPmerge01 is "image warping" based on linear drift, exhaustively testing all the candidate values. But I use the same image-generation code as in SPmerge02, which can handle arbitrary nonlinear resampling but is much, much slower!
So we should look for Python code that can apply affine image warps as quickly as possible. In fact, that might not even be the best approach: some kind of point-based optical flow, to which we then fit a linear drift, could be better. However, this idea could fail for highly periodic images (crystals).
In summary: we should use a different linear image-warping method that doesn't require the KDE convolution steps, and possibly a smarter search algorithm than what I do now (gridding out the linear drift vectors and testing them all). Then all we need to do is translate from the linear image warping into the SPmerge02 coordinate system.
I've written a routine that uses skimage.transform's AffineTransform, together with warp at order=1 (bilinear). I actually forgot to push it from my home machine, so I'll do that when I get home.

It's reasonably quick, but I haven't benchmarked it properly yet. Alternatives that I haven't investigated yet include pytorch's AffineTransform and tensorflow's transform. We can use skimage's AffineTransform to generate the transformation matrices for all approaches.
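As a rough illustration of that approach (the function name and the drift parameterisation below are my own sketch, not the actual routine), a candidate linear drift can be encoded as a shear in an AffineTransform and applied with warp at order=1:

```python
import numpy as np
from skimage.transform import AffineTransform, warp

def shear_image(image, drift_x, drift_y):
    """Resample `image` under a linear-drift shear.

    Illustrative parameterisation: each output coordinate picks up a
    per-row/per-column offset proportional to drift_x / drift_y.
    """
    matrix = np.array([
        [1.0, drift_x, 0.0],
        [drift_y, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])
    tform = AffineTransform(matrix=matrix)
    # order=1 -> bilinear interpolation; warp wants the inverse mapping
    return warp(image, tform.inverse, order=1, preserve_range=True)
```

An exhaustive search over a grid of (drift_x, drift_y) values would then score each warped image and keep the best, which is essentially what SPmerge01 does.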
Do you have any examples of point-based optical flow in the wild?
Hi Thomas,
I don't have any examples. If I get some time I can make a test implementation of point-mapping optical flow (in Matlab), but I am pretty snowed under with work right now! Let me know if it's important.
Improving on affine warping is important, but your current implementation might be good enough.
For (at least) the SPmerge01 function, the bottleneck lies in the image filtering. In the matlab code, this happens here, and the bottleneck is the convolution step.
The convolution with a Gaussian kernel of size 2*r+1 and standard deviation KDESigma is implemented quite a bit faster in scipy and opencv.
I've attached a jupyter notebook with my findings, tested on a 4096x4096 image with an 11x11 kernel and std=0.5.
In short, opencv's is the fastest at 449 ms. Half a second for such an image is still a very long time, since this function is called repeatedly.
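The comparison can be reproduced with something like the following sketch (a smaller image than 4096x4096 is used so it runs quickly; exact timings will of course vary by machine, and opencv is treated as optional):

```python
import time
import numpy as np
from scipy.ndimage import gaussian_filter

def bench(fn, image, repeats=3):
    """Return the best wall-clock time of fn(image) over a few repeats."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(image)
        best = min(best, time.perf_counter() - t0)
    return best

image = np.random.rand(1024, 1024).astype(np.float32)
sigma = 0.5
ksize = 11  # 2*r + 1 with r = 5

# scipy parameterises by sigma; truncate=5/sigma gives a kernel radius of 5
t_scipy = bench(lambda im: gaussian_filter(im, sigma=sigma, truncate=5 / sigma), image)
print(f"scipy.ndimage.gaussian_filter: {t_scipy * 1e3:.1f} ms")

try:
    import cv2
    t_cv = bench(lambda im: cv2.GaussianBlur(im, (ksize, ksize), sigma), image)
    print(f"cv2.GaussianBlur:              {t_cv * 1e3:.1f} ms")
except ImportError:
    pass  # opencv not installed
```

Both calls apply the same separable Gaussian; the difference is purely in the implementations' speed.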
Benchmark_Gaussians.zip