Thanks for reporting this @efmkoene. We will take a look (please open a PR if you like!)
I noticed two other differences w.r.t. the original algorithm. The first is that the Hessian should be computed by convolving the image with Gaussian derivative kernels, whereas right now it is computed by convolving the image with a Gaussian filter followed by two finite-difference approximations of a first derivative, which is not entirely accurate. The second is that the factor alpha is automatically set to 1/ndim, while the correct factor ought to be -1/3 in the 2-D case, so the default method is not giving optimal results.
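For concreteness, here is a minimal sketch of the two approaches (the helper names are mine, not the scikit-image API):

```python
import numpy as np
from scipy import ndimage as ndi

def hessian_via_gaussian_derivatives(image, sigma):
    # Hessian from convolutions with Gaussian derivative kernels,
    # as the original algorithm prescribes.
    hrr = ndi.gaussian_filter(image, sigma, order=[2, 0])
    hrc = ndi.gaussian_filter(image, sigma, order=[1, 1])
    hcc = ndi.gaussian_filter(image, sigma, order=[0, 2])
    return hrr, hrc, hcc

def hessian_via_finite_differences(image, sigma):
    # Gaussian smoothing followed by two finite-difference gradients
    # (np.gradient as a stand-in for the current implementation).
    smoothed = ndi.gaussian_filter(image, sigma)
    gr, gc = np.gradient(smoothed)
    hrr, hrc = np.gradient(gr)
    _, hcc = np.gradient(gc)
    return hrr, hrc, hcc
```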
I'm willing to make the changes (e.g., compute the appropriate Hessian using convolutions with Gaussians), but I am worried about making changes that break people's existing workflows (i.e., the results will change). How is this dealt with? Do I create a new function meijering_Gaussian(...), or would one simply accept that results will be (slightly) different?
Thank you for looking into this @efmkoene!
> I'm willing to make the changes (e.g., compute the appropriate Hessian using convolutions with Gaussians), but I am worried about making changes that break people's existing workflows (i.e., the results will change). How is this dealt with?
We have a couple of options. In the case where it is truly a bug fix, we allow breaking with past behavior. If it is instead just a different convention, but the current one is commonly used, then we may need to add an additional parameter that allows selecting between the two implementations. In that case we can initially default to the old behavior, but with a warning that the default will change in a future release.
Unless there is a publication or some other precedent or benefit to having things as they are currently, then I think this can probably be considered a bug fix and we just make the change.
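For illustration, that transition pattern might look like the following hypothetical sketch; the parameter name and signature here are made up, not the actual scikit-image API:

```python
import warnings

def meijering(image, sigmas=(1,), use_gaussian_derivatives=None):
    # Transition pattern: keep the old behaviour as the default for now,
    # but warn that the default will flip in a future release.
    if use_gaussian_derivatives is None:
        warnings.warn(
            "use_gaussian_derivatives currently defaults to False, but will "
            "default to True in a future release; set it explicitly to "
            "silence this warning.",
            FutureWarning,
        )
        use_gaussian_derivatives = False
    ...  # rest of the filter unchanged
```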
Thanks for your comments @grlee77, I think it is indeed a genuine bug fix. However, with the changes made, the results won't change much in standard situations, which is good news!
I made a figure to compare the differences (I hope you can click on it to see it in greater detail); the ridges going from dark to light are the highlighted features. The results are roughly similar, but you'll see that certain sharp edges are highlighted in greater detail in the new version. For example, a vertical line on the helmet (around x value 400) is more clearly visible, and the text on the button is more clearly delineated.
The changes shouldn't break any of the old behavior, but please check, as this is my first commit to this project.
In summary, the changes are:

- Computing the Hessian via a single call `scipy.ndimage.gaussian_filter(image, sigma, order=[2, 0])` does not give appropriate results, see https://dsp.stackexchange.com/questions/78280/are-scipy-second-order-gaussian-derivatives-correct. As a fix, I have split up the operation into two repeated calls, `scipy.ndimage.gaussian_filter(scipy.ndimage.gaussian_filter(image, np.sqrt(1/2)*sigma, order=[1, 0]), np.sqrt(1/2)*sigma, order=[1, 0])`, which does achieve the correct goal (see the sketch after this list).
- The changes live in the `compute_hessian_eigenvalues` function, with an added flag `use_Gaussian_derivatives` which defaults to `False`, in which case the old version of the code is used (including the scaling `hessian_elements = [(sigma ** 2) * e for e in hessian_elements]` that the original issue started with). The idea is that this piece of code is used in the `sato` algorithm too, perhaps there on purpose. In `meijering`, however, the flag is now automatically set to the new default of `True`. Users of the code can revert to the old behaviour by setting this to `False`.
- `alpha` is changed from `None` (i.e. `1/ndim`) to simply correspond to `-1/3`. This is the correct value for 2-D data (as reported in the paper), and following the same derivation, it applies to 3-D data too.
- For \sigma values <~ 1, the `scipy.ndimage.gaussian_filter` function does not compute appropriate derivatives, at least when setting `order=1` or `2`. So, I have added a new function within `skimage.filters.ridges` called `gaussian_filter_with_FFT_kernel`, which computes a Gaussian in the wavenumber domain, finds its derivative (by multiplying with `(i*k)**order`), and transforms that back to the space domain. This is only used for \sigma values < 0.9. Above that, the two methods are fairly identical, but I assume the scipy version will be slightly faster, or at least might be the faster option...
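As a quick check of the split-call fix in the first item, here is a minimal sketch (my own snippet, not code from the PR) showing that two first-derivative passes at \sigma/\sqrt{2} compose to approximately one second-derivative pass at \sigma:

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
sigma = 2.0

# One pass: second Gaussian derivative along axis 0.
one_pass = ndi.gaussian_filter(image, sigma, order=[2, 0])

# Two passes: first derivatives at sigma/sqrt(2) each; the two Gaussians
# compose to one of width sigma, and the two d/dr factors to d^2/dr^2.
s = np.sqrt(1 / 2) * sigma
two_pass = ndi.gaussian_filter(
    ndi.gaussian_filter(image, s, order=[1, 0]), s, order=[1, 0]
)

# Close for moderate sigma; the two diverge as sigma shrinks, which is
# the small-sigma regime discussed below.
print(np.abs(one_pass - two_pass).max())
```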
This is a good point. I had recently thought about modifying our `skimage.filters.gaussian` to use an FFT-based approach for small sigmas. When reviewing a PR related to SIFT feature detection (#5472), I looked at a paper on Gaussian scale-space implementations where they also recommend a DFT- or DCT-based approach when sigma < 0.8. Shortly after reading that, I noticed that the DIPlib implementation also uses an FFT-based filter in this case.
Very interesting! I decided against a full FFT approach because of the strong wrap-around artifacts (Figure 2 in the linked paper, for \sigma=3, shows it pretty well), though the DCT approach apparently manages to avoid this issue (Figure 3 in the linked paper). My approach for this commit was to sample the FFT-computed kernel using truncate=100, i.e. a kernel radius of 100\sigma; for \sigma=0.5 that creates a filter of length 2*50+1 = 101 samples. (Figure 5 of the linked paper shows filters of length 13 and 17 respectively, with the error dropping off quite rapidly, so I'm assuming the error will not be too great.) The advantage of the truncated FFT-computed kernel is that people can choose their boundary conditions as they want, in line with the other available filters. With a small truncation the result is probably terribly aliased, however (unavoidable, as a Gaussian is not band-limited anyhow), and probably quite bad.
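For reference, a quick check of that kernel-size arithmetic using scipy's truncation rule, which cuts the kernel at `int(truncate * sigma + 0.5)` samples on each side of the centre:

```python
truncate, sigma = 100, 0.5
radius = int(truncate * sigma + 0.5)  # 50 samples on each side
length = 2 * radius + 1               # 101 samples in total, as above
print(radius, length)
```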
If I understand it correctly, the FFT approach automatically assumes wrap-around symmetry at the boundaries, while the DCT approach automatically assumes reflection-symmetry at the boundaries. The latter seems to me the desirable option in most cases. But if we/you/me/someone else decides to implement a new Gaussian filter for small sigmas, it seems like we should either fix the boundary condition for the user, or vary the implementation based on what boundary condition the user chooses (FFT, DCT, FFT-based kernel)?
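If it helps the discussion, here is a minimal sketch of the DCT route (my own snippet, not existing skimage code). Filtering in the DCT-II domain corresponds to half-sample reflection at the boundaries, i.e. scipy.ndimage's mode="reflect":

```python
import numpy as np
from scipy import ndimage as ndi
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
signal = rng.standard_normal(128)
sigma = 3.0

# Gaussian smoothing in the DCT-II domain: multiply the coefficients by
# the continuous Gaussian transfer function sampled at pi * m / n.
n = signal.size
omega = np.pi * np.arange(n) / n
smoothed_dct = idct(dct(signal) * np.exp(-0.5 * (sigma * omega) ** 2))

# The spatial filter with reflecting (half-sample symmetric) boundaries;
# a generous truncate makes the kernels comparable.
smoothed_ndi = ndi.gaussian_filter1d(signal, sigma, mode="reflect", truncate=10)

print(np.abs(smoothed_dct - smoothed_ndi).max())  # tiny for this sigma
```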
I now realize that the FFT-computed kernel is totally useless! Once I go back to the space domain, I end up with a Gaussian sampled at integer values... which is the exact same function the original `scipy.ndimage` approach already uses! The accuracy gain with the FFT approach is just down to the use of the large truncation value of 100. I feel silly now! It is then just relevant to use a (much) larger truncation value once \sigma values become small.
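A quick numerical check of that realization (my own snippet): building the kernel through the FFT and sampling the plain Gaussian at integer offsets give the same thing once both are normalized.

```python
import numpy as np

sigma, n = 0.6, 101

# Kernel built in the wavenumber domain: sample the continuous Gaussian
# transfer function and inverse-FFT it back to the space domain.
k = 2 * np.pi * np.fft.fftfreq(n)
kernel_fft = np.fft.fftshift(np.fft.ifft(np.exp(-0.5 * (sigma * k) ** 2)).real)
kernel_fft /= kernel_fft.sum()

# The Gaussian sampled directly at integer offsets (what scipy.ndimage uses).
x = np.arange(n) - n // 2
kernel_sampled = np.exp(-0.5 * (x / sigma) ** 2)
kernel_sampled /= kernel_sampled.sum()

print(np.abs(kernel_fft - kernel_sampled).max())  # ~ machine precision
```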
So, considerably fewer changes are needed than what I put in this pull request. The Meijering algorithm should still be updated, primarily by computing the Hessian using Gaussian derivatives rather than the current finite-difference approach; but the FFT kernel can be scrapped.
I will update the commit in a few weeks!
I see that the scaling problem has recently been fixed for meijering() in #6149, thanks for that :) However, unless I am mistaken, the exact same problem also applies to the other Hessian-based ridge filters (frangi(), sato(), hessian()), right? Either they should also switch to Gaussian derivatives (perhaps by adding a `use_gaussian_derivatives` option to them too), or it should at least be made possible for them to use non-\sigma^2-scaled Hessians.
Edit: This specific point can be subsumed into #6436.
closed by #6446
I'm using skimage.ridges.meijering to detect snake-like features. However, the Python implementation deviates from the paper reference (appendix, https://onlinelibrary.wiley.com/doi/pdf/10.1002/cyto.a.20022 ).
The problematic line is: https://github.com/scikit-image/scikit-image/blob/7023dfb36c6d255cc284ee09992ec28fccb9f838/skimage/filters/ridges.py#L152
It applies a scaling of \sigma^2 to the Hessian eigenvalues. This would be correct if the Hessian matrix itself were missing a factor \sigma^2, but I cannot find such a source of error. I cannot easily find a source of error of type \sigma^2 either, as this doesn't correspond to either Gaussian derivatives (which are scaled by 1/\sigma^5 when taking two derivatives) or anything in the theory of the paper. When passing several \sigma values into the Meijering algorithm, it means that responses at larger \sigma values are scaled up and unduly selected as the largest.
The fix would be to remove the quoted line.