hugobuddel opened this issue 3 months ago
The Nyquist/sampling theorem doesn't like such small PSFs - the PSF convolution goes through Fourier space where the high frequencies attached to the narrow PSF are messed up. We had a similar situation in spectroscopy where people wanted to observe very narrow lamp lines. There, we could get away with pre-smoothing the input spectrum with a Gaussian that just about fulfilled the sampling theorem, because that was still negligible compared to the slit function. In your case this is not possible. One expensive possibility would be to first use a finer pixel grid where the PSF is properly sampled and then integrate (!) to the final pixel scale. Maybe the way to go is not to apply a PSF effect at all, but prepare a source image such that the flux of unresolved sources is distributed appropriately over a few pixels according to the subpixel position of the source. Or is that just deflecting the issue?
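A minimal sketch of that last idea, i.e. distributing the flux of an unresolved source over a few pixels according to its subpixel position. `add_point_source` is a hypothetical helper (not existing ScopeSim code) using plain bilinear weights; the point is only that the total flux is conserved no matter where the source falls within a pixel:

```python
import numpy as np

def add_point_source(image, x_pix, y_pix, flux):
    """Distribute a point source's flux over the four nearest pixels.

    Bilinear weights based on the subpixel position; the total flux is
    conserved for any position away from the image edges.
    """
    ix, iy = int(np.floor(x_pix)), int(np.floor(y_pix))
    dx, dy = x_pix - ix, y_pix - iy
    image[iy,     ix]     += flux * (1 - dx) * (1 - dy)
    image[iy,     ix + 1] += flux * dx * (1 - dy)
    image[iy + 1, ix]     += flux * (1 - dx) * dy
    image[iy + 1, ix + 1] += flux * dx * dy
    return image

img = np.zeros((16, 16))
add_point_source(img, 7.3, 8.6, flux=100.0)
print(img.sum())  # 100.0, independent of the subpixel position
```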
I can't find an issue on the spectroscopy case; it was probably discussed on Slack or by email. The solution was applied outside ScopeSim anyway.
Oh, that would explain it; I didn't realize the PSF convolution was done in Fourier space. Makes sense, thanks. I had just guessed and hadn't investigated yet.
I think there are some simple options to solve this. For now we just made the PSF artificially larger, which has the added benefit that the stars are better visible in Jupyter notebooks: if the stars are only a single pixel, they seem to disappear when matplotlib zooms out (due to the way matplotlib interpolates, I guess).
There is some code in #403 that applies the PSF by simply looping over all the pixels, so without going to Fourier space. Maybe we can include that functionality as an option in the PSF base class, and then use it when the PSF is either too small or field-varying.
Copy of the code:
# Imports needed for this snippet
from typing import Tuple

import numpy as np
import numpy.typing as npt
from numba import njit, prange


@njit(parallel=True)
def _convolve2d_varying_kernel(image: npt.NDArray,
                               kernel_grid: npt.NDArray,
                               coordinates: Tuple[npt.NDArray, npt.NDArray],
                               interpolator) -> npt.NDArray:
    """(Helper) Convolve an image with a spatially-varying kernel by interpolating a discrete kernel grid.

    Numba JIT function for performing the convolution of an image with a
    spatially-varying kernel by interpolation of a kernel grid at each pixel
    position. Check `convolve2d_varying_kernel` for more information.

    Parameters
    ----------
    image : npt.NDArray
        The image to be convolved.
    kernel_grid : npt.NDArray
        An array with shape `(M, N, I, J)` defining an `MxN` grid of 2D kernels.
    coordinates : Tuple[npt.ArrayLike, npt.ArrayLike]
        A tuple of arrays defining the axis coordinates of each pixel of the
        image in the kernel grid coordinates in which the kernel is to be
        computed.
    interpolator
        A Numba njit'ted function that performs the interpolation. Its
        signature should be
        `(kernel_grid: npt.NDArray, position: npt.NDArray, check_bounds: bool) -> npt.NDArray`.

    Returns
    -------
    npt.NDArray
        The image convolved with the kernel grid interpolated at each pixel.
    """
    # [JA] TODO: Allow for kernel center != kernel.shape // 2
    # Get image, grid and kernel dimensions
    img_i, img_j = image.shape
    grid_i, grid_j, kernel_i, kernel_j = kernel_grid.shape

    # Add padding to the image (note: Numba doesn't support np.pad)
    kernel_ci, kernel_cj = kernel_i // 2, kernel_j // 2
    padded_img = np.zeros((img_i + kernel_i - 1, img_j + kernel_j - 1),
                          dtype=image.dtype)
    padded_img[kernel_ci:kernel_ci + img_i, kernel_cj:kernel_cj + img_j] = image

    # Create output array
    output = np.zeros_like(padded_img)

    # Compute kernel and convolve for each pixel
    for i in prange(img_i):
        x = coordinates[0][i]
        for j in range(img_j):
            pixel_value = image[i, j]
            if pixel_value != 0:
                y = coordinates[1][j]
                # Get kernel for current pixel
                position = np.array((x, y))
                kernel = interpolator(kernel_grid=kernel_grid,
                                      position=position)
                # Apply to image: place the scaled kernel at this pixel's
                # position in a zeroed padded frame and accumulate
                tmp = np.zeros_like(padded_img)
                start_i, start_j = i, j
                stop_i, stop_j = start_i + kernel_i, start_j + kernel_j
                tmp[start_i:stop_i, start_j:stop_j] = pixel_value * kernel
                output += tmp

    return output[kernel_ci:kernel_ci + img_i, kernel_cj:kernel_cj + img_j]
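For reference, the `interpolator` argument could be something as simple as a nearest-neighbour lookup in the kernel grid. A hypothetical sketch (this is not the interpolator from #403, and it leaves the optional `check_bounds` flag unused):

```python
import numpy as np
from numba import njit

@njit
def nearest_kernel(kernel_grid, position, check_bounds=False):
    """Return the grid kernel closest to `position`; no actual interpolation."""
    gi = min(max(int(round(position[0])), 0), kernel_grid.shape[0] - 1)
    gj = min(max(int(round(position[1])), 0), kernel_grid.shape[1] - 1)
    return kernel_grid[gi, gj]
```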
Hmm, that seems a bit brute-force-ish tbh. I guess there's no good way to vectorize that loop (i.e. numpy)? Now with Numba I suppose it "doesn't matter", but that would mean introducing another major dependency to ScopeSim, which at the very least should be well discussed (over at #403 probably...).
It would be good to have the brute force option available, even if it is slow, because in some scenarios (e.g. very small PSF) it is the correct one.
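Note that for a constant (non-field-varying) kernel, a brute-force real-space convolution is already available without adding Numba: scipy can be told to skip the FFT path. A small sketch, only to show the option exists; it does not cover the field-varying case handled by the code above:

```python
import numpy as np
from scipy.signal import convolve

image = np.zeros((64, 64))
image[10, 10] = 1.0
image[40, 25] = 1.0           # two point sources

# A tiny discrete PSF kernel (sums to 1)
kernel = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])

# method="direct" forces the convolution in real space, no Fourier transform
result = convolve(image, kernel, mode="same", method="direct")
print(image.sum(), result.sum())   # flux is conserved
```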
Didn't intend to close this...
The PSF can become very small when simulating wide-field imagers. In DREAMS (irdb PR forthcoming), the FWHM of the PSF is about the size of a pixel. Simulating a star with an analytical PSF will result in a flux that is too low when the PSF is this small.
My guess is that the PSF is sampled at one (or a few) positions for each pixel, which will lead to an underestimate of the flux, because it will probably sample the wing. It could also overestimate the flux when the sampling happens to be exactly at the peak of the PSF.
A proper test needs to be created for this.
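A minimal sketch of what such a test could check, in 1D for brevity: take a Gaussian PSF with an FWHM of about one pixel and compare the total flux obtained by sampling the PSF at pixel centres against the flux obtained by integrating the PSF over each pixel (which is what the detector actually measures). The numbers are illustrative only:

```python
import numpy as np
from scipy.special import erf

fwhm = 1.0                                   # PSF FWHM in pixels
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
x0 = 0.5                                     # subpixel position of the star

centres = np.arange(-10, 11)                 # pixel centres
edges = np.arange(-10.5, 11.5)               # pixel edges

# Flux per pixel when the PSF is sampled at the pixel centre
sampled = np.exp(-0.5 * ((centres - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Flux per pixel when the PSF is integrated over each pixel (exact, via erf)
cdf = 0.5 * (1 + erf((edges - x0) / (sigma * np.sqrt(2))))
integrated = np.diff(cdf)

print(sampled.sum(), integrated.sum())
# Integrating recovers ~1.0 for any x0; sampling under- or overestimates the
# total flux depending on where the star falls within the pixel.
```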