Closed: SimonHeybrock closed this issue 1 month ago.
The current conclusion is that we will try the functional approach. I think it is going to work. Conceptually this is nice, since it further decouples the data reduction workflow from the polarization analysis: the data reduction workflow does not need to be aware of any time or wavelength scales of the polarization equipment. The data reduction will return data in event mode, with time and wavelength preserved for each event. These are then used to compute the factors for applying the polarization correction.
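To illustrate what evaluating the correction per event could look like, here is a minimal sketch. The `transmission` model, its parameters, and the event arrays are all hypothetical stand-ins; in practice the event coordinates would live in a `scipp.DataArray` and the transmission model would come from the polarization-analysis package.

```python
import numpy as np

def transmission(t, wav):
    # Hypothetical transmission model: decays with time and depends on
    # wavelength. The functional form and constants are made up for
    # illustration only.
    return np.exp(-0.1 * t) * np.exp(-0.05 * wav)

# Per-event coordinates preserved by the event-mode data reduction.
event_time = np.array([0.0, 10.0, 20.0])      # s
event_wavelength = np.array([2.0, 4.0, 6.0])  # Angstrom
event_weight = np.ones(3)

# The correction factor is evaluated at each event's precise (t, wav);
# no common time/wavelength grid is imposed on the reduction.
corrected = event_weight / transmission(event_time, event_wavelength)
```

The point of the sketch is that the reduction output only needs to carry the event coordinates; the polarization step owns the model and its scales.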
* The data reduction workflow must perform normalization, e.g., to monitors. Therefore, monitor data must be averaged over a certain time interval. Events can keep higher time resolution, but given that the monitors cannot, does it even make sense to evaluate the event's transmission at a precise time?
Comment:
We can (and should, I suppose) still perform time-dependent monitor normalization with what I suggested above. But that time scale is a priori independent of anything we do for the polarization correction.
Agreed
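A small sketch of how the two time scales can coexist: the monitor is averaged over coarse intervals, while events keep their precise times and are normalized by the mean of the interval they fall into. All numbers and names here are illustrative, not from any actual workflow.

```python
import numpy as np

# Monitor counts averaged over coarse time intervals (the monitor
# cannot resolve finer time scales). Edges/values are made up.
monitor_edges = np.array([0.0, 10.0, 20.0, 30.0])  # s
monitor_mean = np.array([100.0, 80.0, 120.0])      # mean counts per interval

# Events keep their precise times at full resolution.
event_time = np.array([1.0, 5.0, 12.0, 25.0])  # s
event_weight = np.ones(4)

# Each event is normalized by the monitor mean of its time interval.
idx = np.digitize(event_time, monitor_edges) - 1
normalized = event_weight / monitor_mean[idx]
```

This makes the decoupling concrete: the interval width used here is a choice of the normalization step and places no constraint on the (later, independent) polarization correction.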
Collecting some thoughts after the discussion last week about whether the transmission correction should be represented as a function (that can be evaluated at any precise time) vs. a discretized version (a `scipp.DataArray`).
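For concreteness, a toy contrast of the two representations under discussion. The model and grid are invented; a plain array plus coordinate stands in for the role a `scipp.DataArray` would play.

```python
import numpy as np

# Option A: functional -- a callable evaluated at any precise time.
def transmission_fn(t):
    # Hypothetical time-dependent transmission, for illustration only.
    return np.exp(-0.1 * t)

# Option B: discretized -- precomputed on a fixed time grid.
t_grid = np.linspace(0.0, 30.0, 31)       # s
transmission_tab = transmission_fn(t_grid)

# The functional form evaluates exactly at an event's time; the
# discretized form needs interpolation (or a nearest-bin lookup),
# and fixes a time scale up front.
t_event = 12.34
exact = transmission_fn(t_event)
interp = np.interp(t_event, t_grid, transmission_tab)
```

The trade-off in miniature: the callable defers the choice of time scale to evaluation time, while the discretized version bakes a grid into the data and accepts interpolation error.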