I think the dependency gets even worse if we compute the beam center from I(Q) in 4 quadrants of the detector panel, right?
Not really, because that version of the beam-center finder creates its own workflow from scratch (because otherwise there is a chicken-and-egg problem).
Currently the SANS workflow can by default compute `BeamCenter` by looking at the center of mass of the sample data. While this is nice, the scientists have previously indicated that this is not how things operate in practice; instead the beam center would be computed once and then reused. The reason I am bringing this up is different though, as explained in the following.

The current approach adds a chain of dependencies in the workflow graph which I fear will cause more trouble in the long run. In simplified terms, the dependency is `BeamCenter -> CalibratedData -> SolidAngle`: the solid-angle computation needs sample positions and detector positions, which we have only after setting the beam center. That is, applying the beam center (to obtain `CalibratedData`) is done after loading the event data, even if the beam center is actually set to a fixed value. (A related chain exists for the pixel masks: `DetectorMasks -> MaskedData -> CalibratedData -> DetectorMasks`.)
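To make the chain concrete, here is a minimal Python sketch of the current structure, written as plain functions rather than the actual esssans providers (all names and the solid-angle expression are illustrative only):

```python
from typing import NewType

import scipp as sc

# Illustrative domain types mirroring the chain above (not the real esssans types).
RawData = NewType("RawData", sc.DataArray)
BeamCenter = NewType("BeamCenter", sc.Variable)
CalibratedData = NewType("CalibratedData", sc.DataArray)
SolidAngle = NewType("SolidAngle", sc.DataArray)


def calibrate(data: RawData, center: BeamCenter) -> CalibratedData:
    """Apply the beam center by shifting the detector positions (simplified)."""
    out = data.copy()
    out.coords["position"] = data.coords["position"] - center
    return CalibratedData(out)


def solid_angle(data: CalibratedData) -> SolidAngle:
    """Solid angle needs the calibrated positions, hence the chain
    BeamCenter -> CalibratedData -> SolidAngle."""
    distance = sc.norm(data.coords["position"] - data.coords["sample_position"])
    # Placeholder expression; the real computation also uses the pixel geometry.
    return SolidAngle(sc.DataArray(data=1.0 / (distance * distance)))
```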
The full dependency graph looks as follows (after #153, which made the problem more obvious):
An example where this becomes more problematic is chunked processing of files, as well as live reduction: the beam center would be set once, but with the current workflow graph we would have to re-compute `SolidAngle` for every chunk (see the sketch after this paragraph). Even when only a single chunk is processed, the graph-level parallelism of the current approach is worse than it has to be.

Another potential example is the TOF computation: it might be that we need the calibrated positions for precise frame-unwrapping, which is currently not possible, since calibration happens after defining TOF.
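To illustrate the chunking point, here is a rough sketch building on the functions above; `solid_angle_from_positions` and the reduction loops are hypothetical stand-ins (not existing esssans code), and the second function anticipates the restructuring proposed below:

```python
def solid_angle_from_positions(
    positions: sc.Variable, center: BeamCenter
) -> SolidAngle:
    """Hypothetical provider for the proposed structure: it needs only the
    detector geometry and the beam center, not the (per-chunk) event data."""
    distance = sc.norm(positions - center)  # simplified, sample at the origin
    return SolidAngle(sc.DataArray(data=1.0 / (distance * distance)))


def reduce_chunks_current(chunks, center):
    """Current graph: SolidAngle hangs off CalibratedData, i.e. off the
    per-chunk event data, so the chunk-independent solid angle is
    recomputed on every iteration."""
    for chunk in chunks:
        calibrated = calibrate(chunk, center)
        omega = solid_angle(calibrated)  # recomputed for every chunk
        yield calibrated / omega  # stand-in for the actual normalization


def reduce_chunks_proposed(chunks, positions, center):
    """Proposed graph: the solid angle is computed once, outside the loop,
    and reused for all chunks (and likewise in live reduction)."""
    omega = solid_angle_from_positions(positions, center)
    for chunk in chunks:
        yield calibrate(chunk, center) / omega
```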
I therefore propose to re-structure this. The restructured workflow would look as follows:
Here `SolidAngle` does not contain pixel masks yet; we would probably add `MaskedSolidAngle`, which inserts these. At first I was skeptical about this, since I felt that it would risk "forgetting" to mask the solid angle, which would yield a totally wrong normalization term. However, I now think that it will actually become more obvious that we use/need masks in this term by having the explicit `MaskedSolidAngle`. Previously this was kind of implicit and only obvious to an informed reader of the workflow graph (since `SolidAngle` used to depend on `CalibratedMaskedData`, note the "Masked").
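A rough sketch of how the explicit masking step could look, again building on the sketch above with illustrative names (the actual provider signature would be decided during implementation):

```python
DetectorMasks = NewType("DetectorMasks", dict)  # mask name -> boolean sc.Variable
MaskedSolidAngle = NewType("MaskedSolidAngle", sc.DataArray)


def masked_solid_angle(omega: SolidAngle, masks: DetectorMasks) -> MaskedSolidAngle:
    """Insert the pixel masks into the solid angle.

    Having this as an explicit, separate provider makes it visible in the
    workflow graph that the normalization term is masked, instead of relying
    on SolidAngle being computed from already-masked data.
    """
    out = omega.copy()
    for name, mask in masks.items():
        out.masks[name] = mask
    return MaskedSolidAngle(out)
```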
Also note #62, which indicates that the current approach may be flawed anyway.