Closed pmelchior closed 3 years ago
I should mention that the `Renderer` class implements a parameterized transformation (from model to observation), as well as its inverse (to construct an estimate of the deconvolved data). It needs to be parameterized for those cases where we don't trust the mapping and want to fit for e.g. the PSF model or the WCS model of the observation.
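To make this concrete, here is a minimal sketch of what such a parameterized renderer could look like. All names here (`Renderer`, `ConvolutionRenderer`, `parameters`, `inverse`) are hypothetical illustrations of the idea, not the actual scarlet API:

```python
import numpy as np

class Renderer:
    """Hypothetical sketch: parameterized model -> observation mapping.

    `parameters` holds whatever we might want to fit for (e.g. a PSF
    or WCS correction) when we don't fully trust the mapping.
    """

    def __init__(self, parameters=None):
        self.parameters = parameters if parameters is not None else {}

    def __call__(self, model):
        # forward: map the model to the observed frame
        raise NotImplementedError

    def inverse(self, data):
        # backward: estimate of the deconvolved data in the model frame
        raise NotImplementedError


class ConvolutionRenderer(Renderer):
    """Toy example: rendering is multiplication by a fixed matrix."""

    def __init__(self, matrix):
        super().__init__(parameters={"matrix": matrix})

    def __call__(self, model):
        return self.parameters["matrix"] @ model

    def inverse(self, data):
        # pseudo-inverse as a crude deconvolution estimate
        return np.linalg.pinv(self.parameters["matrix"]) @ data
```

Because the matrix lives in `parameters`, an optimizer could update it during fitting, which is exactly the "don't trust the mapping" case described above.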
In other words, the possible optimization parameters of an `Observation` are contained in the renderer. This can include e.g. #212 if we don't trust the photometric calibration.
I like this idea and think that it's a good path forward, but I'm not crazy about the name `Renderer`. Maybe just `Render` or `ModelRendering`? I'm also open to other ideas.
If you use PyCharm (or a similar IDE), it will greatly simplify the refactoring process, as it can find all instances of the refactored classes/functions and update the code so that it still works exactly the same, with a lot less debugging.
Good tip re PyCharm.

As for `Renderer`: it's the class that creates the `render` method, not the method itself. Hence the proposed name. I'm open to alternatives, but it's not that important after all.
I guess the renderer assumes that the content of `match` is moved to the frame, which then makes the decision about how to generate diff kernels; that's OK.
I like making observations frames; I think we used to have that, and it makes sense.
> I'm open to alternatives, but it's not that important after all

Agreed, the name isn't that important.
I also agree that `Renderer` is more descriptive; it just sounds funny to me. In other programming languages, `Render` would just be a mixin that could be added to the class to give it the necessary methods to do the rendering, but alas, Python does not make much use of mixins. Maybe call it the `Rendering` class? I admit that I don't have a better solution.
There are some shortcomings in the current treatment of multi-`Observation` modeling. A primary example is that a user has to decide what kind of `Observation` they are specifying (e.g. `Observation` vs `LowResObservation`), and we have code to perform such a conversion if needed. This is silly. I propose a few changes, which simplify the interface and make development more modular, which we'll need for adding different types of observations:
- A `Renderer` class, which will automatically be instantiated by calling `Observation.match(model_frame)`. The advantage is that we can replace the renderer without changing the type of the observation (so, no more `LowRes...`). The renderers are responsible for the mapping between model pixels and observed pixels. This approach will work for any data that is essentially a bunch of images with Gaussian noise on them (including 1D or 2D spectra, grisms). They all use the same log-likelihood code, etc. So, when do we need a different type of `Observation`? E.g. for photon counters (X-ray, gamma-ray) or weirder things (MKIDs, compressed sensing, [insert other sci-fi here]).
- `Observation` derives from `Frame`. So, an observation is a frame with a data payload and a log-likelihood function. This allows for shorter writing in the many places in the code where we currently write e.g. `obs.frame.wcs`.
- `Observation` payloads are called `data` instead of `images` (because it could be a spectrum), and singular `psf` instead of `psfs`, etc. (because all of them are data cubes). We currently have an inconsistency that `Observation` is called with `psfs`, which sets `Frame.psf` (an instance of `PSF`).

Let me know what you think.
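A small sketch of the renamed, consistent constructor. The `PSF` wrapper and the exact signature are assumptions; the point is only that the argument names (`data`, singular `psf`) now match the `Frame` attributes they set:

```python
class PSF:
    """Stand-in for the PSF model class."""

    def __init__(self, image):
        self.image = image


class Observation:
    """Sketch of the proposed naming (hypothetical signature)."""

    def __init__(self, data, psf=None):
        # `data` can be an image cube or a spectrum;
        # `psf` (singular) matches the Frame.psf attribute it sets.
        self.data = data
        self.psf = psf if isinstance(psf, PSF) or psf is None else PSF(psf)


# current (inconsistent): obs = Observation(images, psfs=psfs)  -> sets Frame.psf
# proposed (consistent):
obs = Observation(data=[[1.0]], psf=[[1.0]])
```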