alcap-org / AlcapDAQ


Estimate the MIDAS event length and uncertainty #200

Open benkrikler opened 10 years ago

benkrikler commented 10 years ago

As raised in #110, we need a safe estimate of the MIDAS event length. We can achieve this by looking at the time distributions.

Naively I'd expect the event length to be the same for all channels and all runs, but that may not be the case if the various delays and reaction times vary.
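One way this could look in practice (a minimal C++ sketch, not AlcapDAQ code; the channel names and hit-time layout are invented): for each channel, take the latest hit time in each MIDAS event and use a high quantile of that distribution as the event-length estimate.

```cpp
// Minimal sketch (not AlcapDAQ code): estimate the effective MIDAS event
// length per channel as a high quantile of the latest-hit time per event.
// The input layout (channel -> per-event hit times in ms) is hypothetical.
#include <algorithm>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// hits[channel][event] = times (ms, relative to event start) of all hits
using HitTimes = std::map<std::string, std::vector<std::vector<double>>>;

double EstimateEventLength(const std::vector<std::vector<double>>& events,
                           double quantile = 0.99) {
  std::vector<double> lastHit;
  for (const auto& ev : events) {
    if (ev.empty()) continue;                       // no hits, no information
    lastHit.push_back(*std::max_element(ev.begin(), ev.end()));
  }
  if (lastHit.empty()) return 0.0;
  std::sort(lastHit.begin(), lastHit.end());
  const size_t idx = static_cast<size_t>(quantile * (lastHit.size() - 1));
  return lastHit[idx];                              // e.g. 99% of events end by here
}

int main() {
  HitTimes hits;
  // Toy input: two channels, three events each (times in ms)
  hits["muSc"] = {{0.1, 54.2, 109.3}, {12.0, 107.9}, {3.3, 108.8}};
  hits["Ge-S"] = {{20.5}, {95.0, 101.2}, {}};

  for (const auto& ch : hits) {
    std::printf("%-6s latest-hit 99%% quantile: %.1f ms\n",
                ch.first.c_str(), EstimateEventLength(ch.second));
  }
  return 0;
}
```

Using a quantile rather than the absolute maximum keeps a single stray late hit from dominating the per-channel estimate.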

AndrewEdmonds11 commented 10 years ago

100ms. Closing...

Joking aside, I don't understand the problem. We know that MIDAS events are roughly 110 ms and we know our pulses and delays are on much shorter timescales. Even if each channel did have a different event length, would the differences be large enough to have a big impact?

jrquirk commented 10 years ago

We don't know when acquisition stops in a digitizer. When the stop-acquisition signal gets sent to the BU CAEN, it never sends back a timestamp saying "This is when the measurement window closed." So for the last muSc event, we don't know if there are no hits after it because we were live and there was no physics, or because our acquisition window closed. Now we'll obviously veto it because the rate is high enough, but for lower-rate detectors it's trickier because the last event could be a couple of milliseconds from the end or just a microsecond.

Anyway, we'll carefully leave a large buffer before where we think the window ends, and even then that buffer is only a small fraction of the measurement window. The effect is small, so maybe it's not important to fix it now, but it should be brought up.

Does this make sense?
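A minimal sketch of the safety-buffer cut described above (not AlcapDAQ code; the window-end estimate and buffer size are placeholder numbers): hits inside the buffer are dropped and the corresponding livetime excluded, since in that region we can't distinguish "live but quiet" from "window already closed".

```cpp
// Sketch of the safety-buffer cut (not AlcapDAQ code): given a conservative
// estimate of where the acquisition window closes, drop hits (and the
// corresponding livetime) inside a buffer before that point.
#include <cstdio>
#include <vector>

struct Hit { double time_ms; };

std::vector<Hit> ApplyEndBuffer(const std::vector<Hit>& hits,
                                double windowEnd_ms, double buffer_ms) {
  const double cutoff = windowEnd_ms - buffer_ms;
  std::vector<Hit> kept;
  for (const auto& h : hits)
    if (h.time_ms < cutoff) kept.push_back(h);      // keep only clearly-live hits
  return kept;
}

int main() {
  std::vector<Hit> hits = {{0.5}, {60.0}, {108.5}, {109.7}};
  const double windowEnd = 110.0;                   // estimated window end (ms)
  const double buffer    = 2.0;                     // safety margin (ms)

  auto kept = ApplyEndBuffer(hits, windowEnd, buffer);
  std::printf("kept %zu of %zu hits; effective livetime %.1f ms\n",
              kept.size(), hits.size(), windowEnd - buffer);
  return 0;
}
```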

AndrewEdmonds11 commented 10 years ago

Yes, I think so. So the real problem is that we might end up missing physics in the lower rate detectors? And at some point we will want to estimate the efficiency of this?

benkrikler commented 10 years ago

It's about both estimating the efficiency of the DAQ and of the algorithms that create the TMEs, and maximising that efficiency whilst being confident that we're accounting for pile-up correctly.

litchfld commented 10 years ago

Andy is probably right that it shouldn't be too important, even in the worst case. As I mentioned in #110, we know that about 0.5% of events are close to the end, and a similar fraction are close to the beginning. If we were to cut the events at the end and correct the efficiency with a simple toy MC, it's probably fine down to a few per mil. In neutrino physics at least, 1% absolute normalisation is pretty good (although admittedly livetime estimation is not normally the biggest factor).
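A sketch of the simple toy MC correction mentioned above, with made-up numbers (the 0.5 ms jitter on the window end and the cut position are assumptions, not measurements): throw hits uniformly over a window whose true end jitters event to event, apply a fixed end cut, and take the surviving fraction as the livetime correction.

```cpp
// Toy MC sketch (made-up numbers): throw hits uniformly over a MIDAS window
// whose true end jitters event to event, apply a fixed end cut, and take the
// surviving fraction of hits as the livetime/efficiency correction.
#include <cstdio>
#include <random>

int main() {
  std::mt19937 rng(42);
  std::normal_distribution<double> trueEnd(110.0, 0.5);    // window end jitter (ms)
  const double cutoff  = 108.0;                            // fixed analysis cut (ms)
  const int    nEvents = 100000;
  const int    nHits   = 10;                               // hits per event (toy)

  long kept = 0, total = 0;
  for (int i = 0; i < nEvents; ++i) {
    const double end = trueEnd(rng);
    std::uniform_real_distribution<double> hitTime(0.0, end);
    for (int j = 0; j < nHits; ++j) {
      ++total;
      if (hitTime(rng) < cutoff) ++kept;
    }
  }
  std::printf("livetime correction = %.4f\n",
              static_cast<double>(kept) / total);
  return 0;
}
```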

But we can almost certainly do better by looking at the timing of noise pulses. In noisy channels we can probably get a decent estimate of where the event edges fall on a per-event basis, and use that to evaluate whether the edge position is stable with respect to other channels on per-run (or longer) timescales, on the assumption that the stability depends only on the electronics type. If it is stable, we can accumulate triggers over several runs and work out the envelopes for the quieter channels.
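As a starting point, the noise-pulse idea might look like this (a sketch with an invented input layout): in a noisy channel, take the first and last pulse times in each MIDAS event as the event-edge estimates, and summarise their spread so stability can be checked run by run before extrapolating an envelope to the quieter channels.

```cpp
// Sketch (hypothetical input layout): use the first/last noise-pulse times in
// a noisy channel as per-event estimates of the window edges, and summarise
// their spread so stability can be compared run by run.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Edges { double start_ms, end_ms; };

Edges EventEdges(const std::vector<double>& pulseTimes) {
  const auto mm = std::minmax_element(pulseTimes.begin(), pulseTimes.end());
  return {*mm.first, *mm.second};
}

int main() {
  // Toy run: per-event noise-pulse times (ms) in one noisy channel
  std::vector<std::vector<double>> run = {
      {0.3, 15.0, 55.2, 109.1}, {0.7, 40.1, 108.6}, {0.2, 70.4, 109.4}};

  double sumEnd = 0.0, sumEnd2 = 0.0;
  for (const auto& ev : run) {
    const Edges e = EventEdges(ev);
    sumEnd  += e.end_ms;
    sumEnd2 += e.end_ms * e.end_ms;
  }
  const double n    = static_cast<double>(run.size());
  const double mean = sumEnd / n;
  const double rms  = std::sqrt(sumEnd2 / n - mean * mean);
  std::printf("estimated window end: %.2f +/- %.2f ms (per-run)\n", mean, rms);
  return 0;
}
```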