cta-observatory / cta-lstchain

LST prototype testbench chain
https://cta-observatory.github.io/cta-lstchain/
BSD 3-Clause "New" or "Revised" License

timestamps 45 seconds off if no initial counters are given #581

Open maxnoe opened 3 years ago

maxnoe commented 3 years ago

Using `lstchain_data_r0_to_dl1` yields timestamps that are off by 45 seconds if the initial counter values are not set, even for the first subrun, which has correct UCTS information.

This is exactly the difference between TAI and UTC (initial offset of 8 seconds plus 37 leap seconds).

maxnoe commented 3 years ago

If no timestamps are given, this is used:

https://github.com/cta-observatory/cta-lstchain/blob/d234471e3da25f7408571628b2cf5bfe2e8331b3/lstchain/reco/r0_to_dl1.py#L469-L472

I suspect that `svc.date` is a Unix timestamp in UTC, not PTP/TAI. Hence the offset.
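A minimal sketch (pure Python, no lstchain dependency; names and numbers are illustrative, not taken from the code) of what such a scale mix-up does: interpreting a UTC Unix stamp as if it were already on the TAI scale shifts every derived timestamp by the current TAI−UTC offset.

```python
# Illustrative sketch of the suspected scale mix-up (not lstchain code).
# TAI - UTC has been fixed at 37 s since the leap second of 2017-01-01.
TAI_MINUS_UTC = 37  # seconds

svc_date_utc = 1_600_000_000  # hypothetical svc.date: Unix timestamp, UTC scale

# Correct handling: convert UTC -> TAI explicitly before comparing
# against TAI-based counters (e.g. UCTS).
run_start_tai = svc_date_utc + TAI_MINUS_UTC

# Buggy handling: take the UTC value as if it were already on the TAI scale.
run_start_assumed_tai = svc_date_utc

print(run_start_tai - run_start_assumed_tai)  # systematic 37 s shift
```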

morcuended commented 3 years ago

Yes, you are right. However, the ~8 second part of the offset changes slightly from run to run, as far as we can tell from previous tests. It is not deterministic, so I would say that applying a fixed time offset would not do the trick for all runs.

maxnoe commented 3 years ago

I was also wrong. The 37 seconds are the full current offset, not the number of leap seconds. So 45 seconds is strange and cannot be explained by UTC vs. TAI alone.

morcuended commented 3 years ago

> I was also wrong. The 37 seconds are the full current offset, not the number of leap seconds. So 45 seconds is strange and not only explained by UTC vs. TAI.

It should be 37 s plus the timestamp of the first event with respect to the start-of-run timestamp `event.lst.tel[telescope_id].svc.date`. The second part is what could differ from run to run. If I remember correctly, we cannot know exactly when the first event is triggered after the run starts.
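This decomposition can be written out with hypothetical numbers (the trigger delay is the run-dependent part, so only the 37 s term is a true constant):

```python
# Hypothetical numbers illustrating the decomposition of the observed offset:
# total = fixed TAI-UTC part + run-dependent delay of the first trigger
# after svc.date.
TAI_MINUS_UTC = 37.0       # s, fixed since 2017
first_trigger_delay = 8.0  # s, first event wrt svc.date; varies run to run

observed_offset = TAI_MINUS_UTC + first_trigger_delay
print(observed_offset)  # 45.0 for this example, matching the reported shift
```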

I remember that the total offset was ~40 seconds (earlier it was larger, ~150 s, because the NTP server was not well synchronized and was shifted by two minutes).

I've been checking my old emails about this issue and found this explanation by Dirk Hoffmann. I had asked:

> I just realized that UCTS timestamps are provided on the TAI scale (37 seconds ahead of UTC), so I was wondering whether the start-of-run NTP time (stored as "date" in the camera_config LST service container) uses the UTC scale instead. This could explain most of the time offset we see between the UCTS TS and the other TSs (Dragon and TIB), which is ~40 s, with still a few seconds between the start of the run and the trigger of the first event. Does that make sense?
>
> Which scale does this "date" timestamp use? I would say it is UTC, but I just wanted to confirm it with you.

Dirk's reply:

> The short answer is: Yes, you are right.
>
> The EVB timestamp (as well as the ZFW timestamp) is taken from Unix system times (on the machines osaka, okinawa, tcs03 and tcs04, respectively). These system clocks are synchronised to the LST1 NTP time server, which in turn is synchronised to a GPS: to the one in the ITC by default (if my information is correct), with the one of the IAC as fallback (and possibly some other fallbacks on La Palma and on the continent).
>
> GPS time is defined to be TAI minus 19 seconds. GPS time actually started in 1980, when it was identical to UTC. But as you know, UTC is adjusted to follow Earth rotation. Presently UTC is 18 s behind GPS, i.e. 37 s behind TAI (since the last leap second in the night of Dec/Jan 2016/17).
>
> The GPS navigation message contains information about the current number of UTC leap seconds. Hence GPS receivers can calculate UTC from it, and I guess that is what they usually do. So also in the case of LST1.
>
> However, I would like to point out again that you should not rely on the EVB nor on the ZFW timestamp for analyses. They are not intended for that at all.
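The fixed relations between the three scales discussed here can be captured in a small helper. This is a sketch using the standard library only; the leap-second count is hardcoded for the post-2017 era, so the conversions are only valid for recent dates, and the function names are my own, not lstchain API:

```python
# Fixed offsets between the three time scales discussed above.
GPS_MINUS_TAI = -19   # s: GPS time = TAI - 19 s, by definition (since 1980)
TAI_MINUS_UTC = 37    # s: valid since the leap second of Dec/Jan 2016/17

def utc_to_tai(utc_seconds: float) -> float:
    """Shift a UTC Unix stamp onto the TAI scale (post-2017 only)."""
    return utc_seconds + TAI_MINUS_UTC

def utc_to_gps(utc_seconds: float) -> float:
    """Shift a UTC Unix stamp onto the GPS scale (post-2017 only)."""
    return utc_to_tai(utc_seconds) + GPS_MINUS_TAI

print(utc_to_tai(0.0))  # 37.0: TAI is 37 s ahead of UTC
print(utc_to_gps(0.0))  # 18.0: GPS is 18 s ahead of UTC at present
```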