HolyLab / RegisterQD.jl

Image registration with the QuadDIRECT optimization algorithm

The importance of smoothing input images #7

Closed: Cody-G closed this issue 5 years ago

Cody-G commented 5 years ago

I'm not opening this issue to propose any changes to the package, but I wanted to post this observation somewhere because I think it's very useful. We've long known that smoothing images before registration (usually with a Gaussian filter) can improve results. I've thought of this as a way to avoid falling into local minima created by noise and/or small-scale features of the image that are unimportant.

I've just realized that there's another good reason to smooth images before registering. It turns out that surprisingly often, the global minimum is the wrong minimum when the input images are not smoothed. I just looked at this carefully when optimizing a simple translation (with qd_translate). This occurs even in images with pretty high SNR. The global minimum tends to occur at shifts roughly halfway between pixels (~0.5, ~1.5, ~2.5, etc.). I scratched my head over this for a bit and then realized that it's a natural consequence of interpolation. Suppose that the "true" intensity sampled in each pixel (the intensity we'd read if there were no noise) is almost equal to the intensity in neighboring pixels. This is the case in many images, because often the "true" intensity just doesn't change that fast from one pixel to the next. (Note that the combination of optical resolution and pixel resolution of an image sets an upper bound on the expected rate of change, but it doesn't set a lower bound.)

Getting to the point, the global minima lie near midpoints of the interpolation grid because when we "warp" an image and re-sample it, each new (interpolated) value is effectively an average of two or more neighboring pixel values, which should always be less noisy than sampling the image at whole-pixel locations.
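
A tiny illustration of the variance argument (pure noise, made-up numbers):

using Statistics

# Linear interpolation halfway between samples averages two independent values,
# so the interpolated signal has roughly half the variance of the on-grid samples.
# Lower variance means the mismatch dips artificially at half-pixel shifts.
a = randn(100_000)
mid = (a[1:end-1] .+ a[2:end]) ./ 2   # values at fractional positions x + 0.5
var(mid) / var(a)                     # ≈ 0.5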

The easiest way to deal with this problem seems to be to smooth the fixed and moving images so that neighboring pixel values are not dominated by noise. I've been selecting the width of the smoothing kernel empirically; something like a few pixels should be enough, but it really depends on the image.
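
Concretely, the workflow I have in mind is something like this (toy image, arbitrary sigmas and mxshift; the qd_translate call is only schematic, so check the docs for the exact signature):

using Images, RegisterQD   # imfilter and KernelFactors come via Images

# Toy 2D example: fixed is just a shifted copy of moving
moving = rand(Float32, 200, 200)
fixed = circshift(moving, (5, -3))

# Smooth both images with the same Gaussian so neighboring pixel values
# are no longer dominated by noise (kernel width chosen empirically)
kern = KernelFactors.gaussian((3.0, 3.0))
fixed_s = imfilter(fixed, kern)
moving_s = imfilter(moving, kern)

# Register the smoothed images; mxshift bounds the translation in pixels
mxshift = (10, 10)
tfm, mm = qd_translate(fixed_s, moving_s, mxshift)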

cc @Animadversio @ChantalJuntao

timholy commented 5 years ago

Great analysis. I agree this is quite useful. Perhaps we should add smoothing as an option? (It's a little inefficient to re-smooth fixed for every stack, but perhaps it would be too small an effect to measure?)

Cody-G commented 5 years ago

Smoothing as an option makes sense to me. We could also add two kwargs so a user can avoid re-smoothing fixed every time: smooth_moving::Bool and smooth_fixed::Bool. Or, better, call them moving_sigmas and fixed_sigmas, and skip the smoothing when all of the sigmas are zero for the fixed or moving image.

The performance hit depends a bit on the image and kernel size. I just benchmarked with the script below, and in this case imfilter takes about 3x longer than warp, so smoothing effectively adds about 3 function evaluations per image, which isn't too bad. (warp should be a good proxy for a single evaluation.)

using Images, RegisterDeformation, CoordinateTransformations
using OffsetArrays
using BenchmarkTools

img = rand(600, 600, 50);                        # synthetic 3D stack
kern = KernelFactors.gaussian((3.0, 3.0, 3.0));  # separable Gaussian, sigma = 3 pixels per axis
tfm = Translation(0.5, 0.5, 0.5);                # half-pixel shift, so warp must interpolate
imgo = OffsetArray(img, -1, -1, -1);             # shifted-axes copy (not used in the benchmarks below)

# Interpolate the globals with $ so @benchmark times only the calls themselves
bfilt = @benchmark imfilter($img, $kern);
bwarp = @benchmark warp($img, $tfm);

timholy commented 5 years ago

Under what circumstances would it make sense to filter fixed and moving with different kernels? If the images are supposed to be "the same" then wouldn't it make sense to filter them in the same way?

Cody-G commented 5 years ago

I think it does make sense to filter them the same way. I only thought of it as a performance enhancement, in case the user wanted to filter fixed on their own and save re-filtering it every time. But now that I think of it, in that case the user might as well filter both images beforehand and set the sigmas kwarg to all zeros. So we'll add just one keyword arg, a tuple of Gaussian widths?

timholy commented 5 years ago

Sounds good. I imagine the only remaining question is whether we want KernelFactors.gaussian or KernelFactors.IIRGaussian. I suppose we could choose the former for sigma < 5 and the latter otherwise. Also allow the user to pass in the actual kernel?
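
Roughly something like this (smoothing_kernel is just a placeholder name for whatever we call the internal helper):

using ImageFiltering   # KernelFactors.gaussian and KernelFactors.IIRGaussian

# Pick the FIR Gaussian for narrow kernels and the IIR approximation
# (cost independent of sigma) once any sigma gets large
function smoothing_kernel(sigmas)
    if maximum(sigmas) < 5
        return KernelFactors.gaussian(sigmas)      # exact (truncated) Gaussian
    else
        return KernelFactors.IIRGaussian(sigmas)   # cheaper for large sigma
    end
end

kern = smoothing_kernel((3.0, 3.0, 3.0))           # FIR Gaussian in this case
smoothed = imfilter(rand(100, 100, 20), kern)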

Cody-G commented 5 years ago

I suppose we could choose the former for sigma < 5 and the latter otherwise.

Sounds good to me

Also allow the user to pass in the actual kernel?

My first thought is that if the user is sophisticated enough to design their own kernel then they don't need the keyword arg and they can just preprocess the images themselves. And maybe having two non-orthogonal keyword args could be confusing for less sophisticated usage? But I don't feel that strongly one way or the other; happy to add it if you disagree.

timholy commented 5 years ago

if the user is sophisticated enough to design their own kernel then they don't need the keyword arg

That sounds right. Good plan, best to keep the kwarg options limited & clear.

timholy commented 5 years ago

Another option I just thought of: use quadratic interpolation without prefiltering. The idea is that quadratic B-spline interpolation uses a weighting

val(x) = cm*(x - 0.5)^2/2 + c*(0.75 - x^2) + cp*(x + 0.5)^2/2

where -0.5 <= x <= 0.5 (within half a grid point of 0) and cm, c, cp are the coefficients for the data at -1, 0, and 1. Normally we solve a tridiagonal system to ensure that the coefficients cm, c, cp of the interpolation-coefficient array are tuned so that the on-grid values reproduce the observed values---this is essentially "anti-smoothing".
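
For reference, the standard prefiltered path in Interpolations (recent API) reproduces the data exactly at grid points:

using Interpolations

A = rand(10)
itp = interpolate(A, BSpline(Quadratic(Line(OnGrid()))))   # solves the tridiagonal prefilter
itp(3) ≈ A[3]                                              # true: on-grid values are reproduced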

What we could do is:

- leave moving un-prefiltered, i.e. use its raw pixel values directly as the quadratic coefficients, so evaluating the interpolant implicitly smooths moving
- smooth fixed with a matching kernel so both images end up comparably blurred

That way you'd only have to filter fixed and construction of moving would be cheap---just like now, where we assume linear interpolation. All this is implemented in Interpolations.
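
And a quick check that skipping the prefiltering amounts to implicit smoothing: with the raw data as coefficients, the on-grid value from the weighting above is just a [1/8, 3/4, 1/8] average of the neighbors.

# Quadratic B-spline weighting from above, with cm, c, cp taken as raw data values
val(x, cm, c, cp) = cm*(x - 0.5)^2/2 + c*(0.75 - x^2) + cp*(x + 0.5)^2/2

cm, c, cp = 1.0, 2.0, 4.0
val(0.0, cm, c, cp) ≈ cm/8 + 3c/4 + cp/8   # true: on-grid values are already smoothed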

Cody-G commented 4 years ago

This one slipped by me and I only just noticed it. "Cheap" smoothing by exploiting quadratic interpolation is a cool idea!