JeffersonLab / coatjava


FCup conversion from counts to charge depends on local time #32

Open raffaelladevita opened 1 year ago

raffaelladevita commented 1 year ago

The conversion of the FCup integrating scaler counts to charge requires the integration time to correct for the FCup offset. The best measurement of the integration time should come from the clock scaler. However, since the frequency is set to 1 MHz, this scaler readout can roll over during a run and is not usable.
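The offset correction itself is simple once the integration time is known; a minimal sketch of the idea, with illustrative constant names and values (not the actual CCDB calibration):

```java
public class FcupChargeSketch {

    // Illustrative numbers only, NOT the real CCDB calibration constants:
    static final double OFFSET_HZ = 100.0;           // beam-off counting rate of the FCup scaler
    static final double SLOPE_COUNTS_PER_NC = 906.2; // hypothetical counts-per-nC conversion

    // Convert integrated scaler counts to charge (nC), subtracting the
    // pedestal accumulated over the integration time.
    static double chargeNc(long counts, double integrationSeconds) {
        return (counts - OFFSET_HZ * integrationSeconds) / SLOPE_COUNTS_PER_NC;
    }

    public static void main(String[] args) {
        // 1,000,000 counts over 10 s: subtract 100 Hz * 10 s = 1000 pedestal counts
        System.out.println(chargeNc(1_000_000L, 10.0));
    }
}
```

This is why a wrong integration time directly biases the charge: the error scales with the offset rate times the time error.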

As a backup solution, the integration time is currently estimated from the current event Unix time and the run start time in RCDB. Since the latter is a local time, the result depends on the local time of the machine the code runs on.

baltzell commented 1 year ago

Regarding the Faraday cup DSC2 scaler's clock (see issue JeffersonLab/coatjava#27) ... It was indeed 1 MHz during the first clas12 run period and was later fixed to a more appropriate 100 kHz, but it regressed again in a later run period, if I remember correctly. So the issue is that it can't be used uniformly for all data sets just by putting the clock frequency in CCDB (which the software is already prepared for) without fixing the rollover.
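For scale (assuming the clock scaler is a 32-bit counter): at 1 MHz it wraps in just over an hour, well within a typical run, while at 100 kHz a run would have to last roughly half a day before wrapping:

```java
public class RolloverPeriod {

    // Seconds until a 32-bit scaler wraps at a given clock frequency
    static double rolloverPeriodSeconds(double clockHz) {
        return Math.pow(2, 32) / clockHz;
    }

    public static void main(String[] args) {
        System.out.printf("1 MHz  : %.0f s (~%.1f min)%n",
                rolloverPeriodSeconds(1e6), rolloverPeriodSeconds(1e6) / 60);
        System.out.printf("100 kHz: %.0f s (~%.1f h)%n",
                rolloverPeriodSeconds(1e5), rolloverPeriodSeconds(1e5) / 3600);
    }
}
```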

There is a branch iss967-rollover that does fix the rollover, but that implementation requires reading each entire run in one shot, which is feasible with train outputs but probably unfeasible with full DSTs (mainly due to the logistics and disk footprint required before writing to tape).
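Reading a run sequentially makes the fix straightforward: as long as consecutive readouts are closer together than one wrap period, a 32-bit counter can be unwrapped from the modular forward difference. A sketch of that idea (not the iss967-rollover implementation itself):

```java
public class ScalerUnwrap {

    // Extend a 32-bit rolling counter to 64 bits, given the previous
    // unwrapped value. Assumes the counter only increments and that
    // successive samples are less than one wrap period (2^32 ticks) apart.
    static long unwrap(long prevUnwrapped, long raw32) {
        long prevRaw = prevUnwrapped & 0xFFFFFFFFL;
        long delta = (raw32 - prevRaw) & 0xFFFFFFFFL; // modular forward difference
        return prevUnwrapped + delta;
    }

    public static void main(String[] args) {
        long t = 0xFFFFFFF0L;           // just before a wrap
        t = unwrap(t, 0x10L);           // raw counter has wrapped past zero
        System.out.println(Long.toHexString(t)); // 100000010
    }
}
```

The catch, as noted above, is that this needs every readout of the run in order, which is where the logistics come in.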

One could also extract the rollovers (a map of run/event number to the number of rollovers so far in the run), which would need to be done during post-processing or in a new upstream job before CLARA, where we have access to tag-1 events, and store them (e.g., in CCDB) to apply a fix during reconstruction or post-processing. One technical issue there is that you can't fix file N+1 until all previous N files have been mapped for rollovers.
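If the rollover counts were extracted and stored, applying them at reconstruction time is just a floor lookup by event number. A sketch, with a hypothetical map layout (not an existing coatjava or CCDB structure):

```java
import java.util.Map;
import java.util.TreeMap;

public class RolloverMap {

    // Hypothetical layout: first event number at which each cumulative
    // rollover count takes effect -> number of rollovers so far.
    private final TreeMap<Long, Long> rolloversByEvent = new TreeMap<>();

    void put(long firstEvent, long rollovers) {
        rolloversByEvent.put(firstEvent, rollovers);
    }

    // Correct a raw 32-bit clock reading using the stored rollover count.
    long corrected(long eventNumber, long raw32) {
        Map.Entry<Long, Long> entry = rolloversByEvent.floorEntry(eventNumber);
        long rollovers = (entry == null) ? 0L : entry.getValue();
        return raw32 + (rollovers << 32);
    }

    public static void main(String[] args) {
        RolloverMap map = new RolloverMap();
        map.put(0L, 0L);
        map.put(1_500_000L, 1L); // first rollover observed around event 1.5M
        System.out.println(map.corrected(2_000_000L, 12345L)); // 2^32 + 12345
    }
}
```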

Or we could get more clever and efficient and use another timestamp in the data (unix or TI) to calculate when the rollovers must happen and correct for them that way, but I haven't thought about that much. This would still require getting the run start time from RCDB.
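That idea can be made concrete: from the elapsed time since run start and the clock frequency, predict the expected tick count, then take the wrap count that brings the raw 32-bit reading closest to it. A sketch under those assumptions:

```java
public class TimestampRollover {

    // Reconstruct the full clock value from its low 32 bits, using an
    // independent elapsed-time estimate (e.g. event unix time minus the
    // RCDB run start). Tolerates timestamp errors well below one wrap
    // period (2^32 / clockHz seconds).
    static long reconstruct(long raw32, double elapsedSeconds, double clockHz) {
        long expected = (long) (clockHz * elapsedSeconds);  // predicted ticks
        long wraps = (expected - raw32 + (1L << 31)) >> 32; // nearest wrap count
        return raw32 + (wraps << 32);
    }

    public static void main(String[] args) {
        // 5000 s at 1 MHz: 5e9 ticks, i.e. one wrap past 2^32
        long raw = 5_000_000_000L - (1L << 32); // what the 32-bit scaler reads
        System.out.println(reconstruct(raw, 5000.0, 1e6)); // 5000000000
    }
}
```

Note this still inherits whatever uncertainty is in the elapsed-time estimate, so it only works while that uncertainty is small compared to the wrap period.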

I think one thing we should do regardless is sample a few files spread across each of the different run periods and measure the clock. The ones with an appropriate clock frequency, which I think/hope is most of them, can just use the clock, with no rollover issue, and drop the hack of unix event time minus RCDB start time as a proxy for the scaler clock.

baltzell commented 1 year ago

Regarding the "local time" issue ... That's a misnomer, it's really the locale (timezone and daylight savings stuff), as nothing related in the software is using the "local time". The (rolling over) scaler clock is being substituted by the time duration between a given EVIO event's unix time and the RCDB run start time. It shouldn't matter what locale is used, as long as it's the same for both of those timestamps. There's an additional complication that the Java RCDB library delivers run start time as java.sql.Time, which is only HH:MM:SS, so this could get nasty.
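To illustrate the distinction: a difference between two epoch-based instants is locale-independent, whereas a bare wall-clock time with no date or zone attached (which is effectively what java.sql.Time carries) maps to different instants depending on the zone it is interpreted in. A small sketch:

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class LocaleElapsed {

    // Elapsed seconds between two instants; no time zone involved.
    static long elapsedSeconds(Instant runStart, Instant event) {
        return Duration.between(runStart, event).getSeconds();
    }

    public static void main(String[] args) {
        Instant start = Instant.ofEpochSecond(1_689_700_000L);
        Instant event = start.plusSeconds(3600);
        System.out.println(elapsedSeconds(start, event)); // 3600, in any locale

        // The hazard: the same wall-clock string maps to different instants
        // depending on the zone it is interpreted in.
        LocalDateTime wall = LocalDateTime.parse("2023-07-18T19:46:00");
        Instant utc = wall.atZone(ZoneId.of("UTC")).toInstant();
        Instant est = wall.atZone(ZoneId.of("America/New_York")).toInstant();
        System.out.println(Duration.between(utc, est).getSeconds()); // 14400 (4 h, EDT)
    }
}
```

So the fix is to keep both timestamps as epoch seconds end to end, rather than round-tripping either one through a zoned or time-of-day representation.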

baltzell commented 1 year ago

Can this be closed with #50?

raffaelladevita commented 1 year ago

Yes!
