Closed: AndrewEdmonds11 closed this issue 10 years ago
Normalizing to muSc hits automatically accounts for time if the beam hasn't changed. I vote for muSc hits.
Was the muSc not moved around a bit from CAENs to FADCs? Otherwise it sounds like a good option to me.
I don't recall, but I don't think so. Either way, physically it's more consistent with the rest of the data.
I don't think it was moved around too much. In any case, I should be able to use TSetupData. I have a gut feeling muSc normalisation might be a bit trickier because I don't think that information is stored anywhere (or at least anywhere that's easily accessible). I'll look into it, though.
Counting the digitizer hits is tough because without a pulse-height cut or something, you're counting muons, electrons, and all hits on other channels plugged into the BU CAEN. From the DQ documents, it seems you have access to the muSc TDC data. You could keep a running count of that?
That's a good idea, I'll have a look. That raises the question of what the muSc TDC plot should be normalised to. Just the run time?
Just realised that we already have muSc TDC normalised to run time - it's the muSc TDC rate plot.
Also, here's how I propose to implement normalisation. Instead of counting the number of muSc TDC hits in every module, we should try to access the value in the TDCCheck module.
The only way I can see that this can be done is to make the TDCCheck_muSc histogram an extern in each module (sorry, Ben) and then, in the module's end-of-run routine, get the number of entries and scale the relevant histogram.
I've quickly tested this with MDQ_IslandCounter and run 3200:
I was concerned that we might need to ensure that the TDCCheck module was run first, but because these are end-of-run (EOR) routines, it doesn't make a difference.
If no-one has any major objections to this then I will add normalisation to the following modules:
I don't think it makes any sense to add it to:
All the above is now done on the branch feature/AE_normalise_LLDQ_modules. Should I just "git flow finish" it now or do we need to have an organised way of finishing features?
Sounds fantastic. I'd say finish it.
Done.
Sorry, I wasn't thinking. I forgot a potentially relevant point:
Normalising to muSc is fine for distributions (or parts of distributions) that are dominated by real physics. But for distributions that are dominated by noise (e.g. pulse amplitude), it will only work if the beam intensity stays constant. We know there are some runs in the golden sets where the beam intensity is lower (~40%?), so we will see some effects after normalising by muSc. It's probably true that we will see some variation after normalising by run time too. That variation might be smaller, but it depends on the distribution.
So in today's meeting it was suggested that we normalise the histograms to something so that it is easier to see when things are going wrong unexpectedly.
The options seem to be:
From the discussion it wasn't clear to me which of these options is best. Does anyone else have a clearer idea?