andykee closed this issue 3 years ago
The following snippet shows that the current implementation is correct:

```python
import numpy as np
import lentil

a = np.zeros((11, 11))
a[5, 5] = 1
scale = 2
b = lentil.convolvable.jitter(a, scale)

def fwhm(arr):
    arr = arr / np.max(arr)
    arr[arr < 0.5] = 0
    extents = lentil.util.boundary(arr)
    return np.max([extents[1] - extents[0], extents[3] - extents[2]])

# The relationship between FWHM and sigma (scale in our case) is
# given by FWHM = 2*sqrt(2*ln(2))*sigma
# (https://en.wikipedia.org/wiki/Full_width_at_half_maximum)
print(f'Calculated: {2*np.sqrt(2*np.log(2))*scale}')
print(f'Observed: {fwhm(b)}')
```

```
Calculated: 4.709640090061899
Observed: 4
```
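For reference, the calculated value comes straight from the half-maximum condition: solving exp(-x**2 / (2*sigma**2)) = 1/2 gives x = sigma*sqrt(2*ln(2)), so FWHM = 2*sqrt(2*ln(2))*sigma ≈ 2.3548*sigma, which is 4.7096 for sigma = 2.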
The match isn't perfect (the observed FWHM is quantized to whole pixels on this coarse grid), but it's certainly not off by a factor of 2. I suspect the results would be closer in an oversampled case.
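A quick way to check that suspicion, reusing the same `lentil.convolvable.jitter` call as above (the 101x101 grid and `scale = 10` below are illustrative values, not numbers from this issue):

```python
import numpy as np
import lentil

def fwhm(arr):
    arr = arr / np.max(arr)
    arr[arr < 0.5] = 0
    extents = lentil.util.boundary(arr)
    return np.max([extents[1] - extents[0], extents[3] - extents[2]])

# Same test on a finer grid with a larger sigma, so the measured FWHM
# suffers less from whole-pixel quantization
a = np.zeros((101, 101))
a[50, 50] = 1
scale = 10

b = lentil.convolvable.jitter(a, scale)
print(f'Calculated: {2*np.sqrt(2*np.log(2))*scale}')  # 23.548...
print(f'Observed: {fwhm(b)}')
```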
Writing a unit test for `convolvable.Jitter` uncovered an interesting question about what the `scale` term should actually represent when applying jitter to an image.

The code uses `scale` as the standard deviation of the Gaussian kernel. This behavior is exactly as described in the docs. Practically, what this means is that if we ask for 2 pixels of jitter, what we get back is actually 2 pixels of jitter in every direction, or what looks more like 4-5 pixels.
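To make the "4-5 pixels" observation concrete, here is a standalone 1-D sketch of a Gaussian kernel with sigma = 2 (an illustration only, not lentil's actual kernel code):

```python
import numpy as np

sigma = 2  # the requested "pixels of jitter"
x = np.arange(-10, 11)
kernel = np.exp(-x**2 / (2 * sigma**2))

# Pixels at or above half the peak value: [-2 -1 0 1 2], so the blur
# spans roughly 5 pixels even though sigma is only 2
print(x[kernel >= 0.5 * kernel.max()])
```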
My gut feeling says that we should be dividing `scale` by 2 when computing the kernel, but I have to dig through some references to verify.

Whether the code ends up changing or not, the docstring should be updated to make it clearer how `scale` is used internally and what a user should expect to see.
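As a starting point, the clarification could look something like this sketch (the function signature here is assumed for illustration and may not match lentil's actual one):

```python
def jitter(img, scale):
    """Blur an image to simulate jitter.

    Parameters
    ----------
    img : ndarray
        Input image.
    scale : float
        Standard deviation (sigma) of the Gaussian jitter kernel, in
        pixels. Because scale is a standard deviation, the visible blur
        extends beyond scale pixels: the kernel's full width at half
        maximum is 2*sqrt(2*ln(2))*scale, or about 2.355*scale.
    """
```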