Benjamin spotted one trick we inherited from scarlet1: https://github.com/pmelchior/scarlet2/blob/51a583df4ad086c541a2379d9f5cbc22657845bc/scarlet2/renderer.py#L74
We don't apply the full PSF convolution to match the observed PSF; we only apply a difference kernel that maps the model PSF to the observed one. Why?
Initially, I thought we needed to make sure that the morphology image remains well-sampled, so we created an additional model PSF of minimal width to ensure that. I don't think that was incorrect, but today I would try to oversample the model image to avoid that problem. Keeping this small model PSF in place had an important side effect, though: it made scarlet1 much faster.
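To make the trick concrete, here is a minimal sketch of difference-kernel rendering in JAX. Instead of deconvolving by the model PSF and then convolving with the observed PSF as two separate operations, the two are folded into a single kernel computed in Fourier space. The function names, the Wiener-style regularization, and the handling of centering and padding are my own simplifications, not the actual scarlet2 renderer linked above.

```python
import jax.numpy as jnp

def difference_kernel(psf_obs, psf_model, eps=1e-6):
    """Kernel that maps the narrow model PSF onto the observed PSF.

    Convolving the model image with this kernel is equivalent to
    deconvolving by the model PSF and convolving with the observed PSF,
    done in one better-conditioned step. Both PSF images are assumed to
    share the same shape and centering convention (illustrative only).
    """
    fft_obs = jnp.fft.rfft2(psf_obs)
    fft_model = jnp.fft.rfft2(psf_model)
    # Wiener-style regularized ratio, to avoid dividing by near-zero
    # power where the model PSF has no support.
    ratio = fft_obs * jnp.conj(fft_model) / (jnp.abs(fft_model) ** 2 + eps)
    return jnp.fft.irfft2(ratio, s=psf_obs.shape)

def render(model_image, diff_kernel):
    """Match the model frame to the observation by convolving with the
    difference kernel (simplified: single band, no padding or resampling)."""
    fft_img = jnp.fft.rfft2(model_image)
    fft_kernel = jnp.fft.rfft2(jnp.fft.ifftshift(diff_kernel))
    return jnp.fft.irfft2(fft_img * fft_kernel, s=model_image.shape)
```

Because the model PSF is narrow, the difference kernel is close to the observed PSF and well-behaved, which is part of why this step is cheap.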
What we're doing could be described as deconfusion by deconvolution. Deconvolving usually makes the source overlap less severe, which helps the source separation. However, the deconvolution takes time, and more aggressive deconvolution just eats up more iterations. For Rubin DRP, it doesn't make sense to go all the way down to a zero-width PSF in the model frame (possibly at the extra expense of internal oversampling) just to make the deblender work a little better.
That said, if we wanted to get the actual brightness distribution on the sky (and I do want that), the model PSF is artificial. I wonder if we can push our prior-based deconvolution to zero PSF width, possibly by using the current prior (trained with a finite model PSF) but running with a forward model in the likelihood that assumes a zero-width model PSF (i.e. convolving with the full observation PSF), then retraining the prior. Rinse and repeat.
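Purely as a sketch of that rinse-and-repeat idea, and nothing more: `fit_sources` and `train_prior` below are hypothetical stand-ins for whatever scene-fitting and prior-training code would be used, not scarlet2 API.

```python
def push_prior_to_zero_psf(observations, prior, fit_sources, train_prior, n_rounds=3):
    """Schematic loop for the rinse-and-repeat scheme (assumption-heavy sketch).

    fit_sources(observations, prior, model_psf): fits every scene with the
        given morphology prior, where model_psf=None means the likelihood's
        forward model convolves the morphology directly with the full
        observation PSF (a zero-width model PSF).
    train_prior(morphologies): fits a new prior to the resulting, more
        deconvolved morphologies.
    """
    for _ in range(n_rounds):
        # Use the existing prior (trained at a finite model PSF), but let
        # the forward model assume no model PSF at all.
        morphologies = fit_sources(observations, prior=prior, model_psf=None)
        # Retrain the prior on the new morphologies, then repeat.
        prior = train_prior(morphologies)
    return prior
```

Whether this converges to something sensible rather than amplifying noise is exactly the open question.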