
multi-observation inits #73

Open pmelchior opened 2 weeks ago

pmelchior commented 2 weeks ago

None of the init methods were tested for multi-observation scenarios, mainly because it's not at all clear how that should be done. For instance, `spectrum = init.pixel_spectrum(observations, center)` can imply a number of things that are supposed to happen:

  1. observations contain different channels, and the resulting spectrum is a concatenation in the order of the observations. However, this may not match the order of channels in the model frame. In this case, we should read the model frame channels and associate each observed channel with the corresponding model channel.
  2. observations contain (at least some) identical channels. Should we average the results? Maybe.

So, we could improve this by implementing a channel check as in item 1 and perhaps emitting a warning in case 2 (see the sketch below).
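A minimal sketch of that check, assuming each observation exposes a `channels` list and the per-observation spectra come from `init.pixel_spectrum`; the helper `matched_spectrum` and its signature are hypothetical, not scarlet2 API:

```python
import warnings
import numpy as np

def matched_spectrum(model_channels, observations, spectra):
    # Assemble a model-frame spectrum from per-observation pixel spectra.
    # `spectra` holds the result of init.pixel_spectrum for each observation;
    # channels are matched by name (item 1), duplicates are averaged (item 2).
    values = np.zeros(len(model_channels))
    counts = np.zeros(len(model_channels), dtype=int)
    for obs, spec in zip(observations, spectra):
        for channel, value in zip(obs.channels, spec):
            idx = model_channels.index(channel)  # raises if channel not in model frame
            values[idx] += value
            counts[idx] += 1
    if np.any(counts > 1):
        warnings.warn("identical channels in multiple observations, averaging spectra")
    return values / np.maximum(counts, 1)
```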

The situation gets more complicated for `adaptive_morphology`. The method finds the largest bounding box around `center` that contains light of the same color as the center pixel. It then measures the moments of the image in that box to create a Gaussian with those moments. If the placement or resolution of the observations differs, we have to adjust how these Gaussian moments are combined. In particular, we can take advantage of DEIMOS (#65), which gives us deconvolved moments, so the effects of different PSFs are already taken out. What's missing is that different resolutions mean a different number of pixels for an object of the same physical size. But that correction is also analytic: the correct factor is $(s_\mathrm{frame} / s_\mathrm{obs})^p$ for the $p$-th moment. So we can combine the moments across different observations.
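As a sketch of that correction (function name and signature are illustrative, not scarlet2 API):

```python
def rescale_moment(moment, p, s_obs, s_frame):
    # Analytic pixel-scale correction for a deconvolved p-th order moment:
    # convert from the observed pixel scale s_obs to the model-frame scale
    # s_frame with the factor quoted above.
    return moment * (s_frame / s_obs) ** p
```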

Originally posted by @pmelchior in https://github.com/pmelchior/scarlet2/issues/72#issuecomment-2221127686

pmelchior commented 2 weeks ago

The pixel spectrum is straightforward for multi-observations, but the adaptive morphology is trickier. What we basically attempt is to determine the moments of the observed images, but in the frame of the model. That involves two distinct steps:

  1. Determine the size of the box in each observation that contains (most of) the source but little else (background, neighbors). It's not clear initially whether the boxes should be the same across observations or be allowed to adjust independently. The second option is more modular, but the first may be more robust, especially for detecting the presence of neighbors.
  2. Measure the moments of the source in those boxes. When the frames differ, one has to use sky coordinates for the center around which the moments are computed, which also ensures that multiple observations use the same center for that computation. But the boxes cannot be identical in general (for instance, if the observations are shifted or rotated with respect to each other), unless they are defined in model frame pixels and then somehow transformed into potentially complex image masks for each observation. Having independent boxes from step 1 would be easier here. Then we can measure the moments, deconvolve them from the PSF, and correct them for both the change of pixel resolution (as written above) and the orientation; a sketch follows this list. The beauty of this approach is that all of these moment-space operations are analytic and don't involve any image processing.
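For second moments, the whole moment-space pipeline of item 2 fits in a few lines. A sketch, assuming each observation provides the PSF's second-moment tensor and its grid rotation with respect to the model frame (all names hypothetical):

```python
import numpy as np

def moments_to_model_frame(Q_obs, Q_psf, angle, s_obs, s_frame):
    # DEIMOS-style deconvolution: for centered second moments, convolution
    # adds the PSF tensor, so deconvolution is a subtraction
    Q = Q_obs - Q_psf
    # pixel-scale correction: factor (s_frame / s_obs)^p with p = 2
    Q = Q * (s_frame / s_obs) ** 2
    # orientation correction: rotate the tensor into the model frame, Q' = R Q R^T
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return R @ Q @ R.T
```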

Given item 2, it sounds best to me to treat the moment measurement (including the definition of the adaptive box) independently across the observations and only combine the moments afterwards for the final estimate of the Gaussian shape.
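A sketch of that final combination, averaging the per-observation model-frame tensors and reading off the Gaussian shape (a simple mean; S/N weighting would be a natural refinement):

```python
import numpy as np

def combined_gaussian_shape(tensors):
    # average the per-observation model-frame second-moment tensors
    Q = np.mean(tensors, axis=0)
    # Gaussian parameters from the combined tensor: size |Q|^{1/4} and
    # the standard ellipticity components
    size = np.linalg.det(Q) ** 0.25
    trace = Q[0, 0] + Q[1, 1]
    e1 = (Q[0, 0] - Q[1, 1]) / trace
    e2 = 2.0 * Q[0, 1] / trace
    return size, e1, e2
```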