Closed sandersaares closed 4 years ago
Closing in favor of ongoing discussion in https://github.com/Dash-Industry-Forum/Guidelines-TimingModel/issues/5
What you are suggesting is impossible in real operation and works only in a lab environment.
In any deployment, multiple clocks will inevitably drift apart, however slowly. You cannot practically solve this: you have a large distributed system in which the service is de facto driven by the genlock at the acquisition point in the van or studio, and consumed by a multitude of devices with their own slightly different clocks, time sources, and protocols. A SHALL statement prohibiting this scenario is an affront to our credibility as an industry forum.
I would rather say "if you are doing low-latency linear, please use prft, and remember that slow clock drift may accumulate over a sufficiently long time". Let the implementers sort it out. They may indeed choose to ignore it, but then it is their informed choice, not ours.
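To illustrate the kind of bookkeeping prft enables (a toy sketch with made-up numbers, not anything from the spec): each prft sample pairs a wall-clock timestamp with the media time the encoder claims was being produced at that instant, so a client can compare two samples to estimate the drift rate:

```python
# Hypothetical sketch: estimate encoder clock drift from two
# ProducerReferenceTime ('prft') samples. Each sample pairs a
# wall-clock time (seconds) with the media time (seconds) the
# encoder claims was being produced at that instant.

def drift_rate(sample_a, sample_b):
    """Seconds of drift accumulated per second of wall-clock time.

    A perfect encoder clock yields 0.0; a positive value means the
    encoder clock runs fast relative to wall clock.
    """
    wall_a, media_a = sample_a
    wall_b, media_b = sample_b
    wall_elapsed = wall_b - wall_a
    media_elapsed = media_b - media_a
    return (media_elapsed - wall_elapsed) / wall_elapsed

# Example: over 1000 s of wall clock the encoder produced 1000.05 s
# of media -- 50 ppm fast, i.e. roughly 4.3 s of drift per day.
rate = drift_rate((0.0, 0.0), (1000.0, 1000.05))
print(f"{rate * 86400:.1f} s/day")  # ~4.3 s/day
```

This also shows why "slow" matters: 50 ppm is invisible over a segment but adds up to seconds per day.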
If you specify clock synchronization, you must also specify a tolerance. And consider that even when time is tightly synchronized, any two systems may well be in different integer seconds around the top of each second.
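To make the integer-second point concrete (a toy illustration; the 20 ms tolerance is an arbitrary assumption): two clocks agreeing within tolerance can still straddle a second boundary and report different integer seconds:

```python
import math

# Two clocks synchronized to within 20 ms. Near the top of a second
# they can nonetheless disagree on the current integer second.
tolerance = 0.020
clock_a = 41.990               # just before the boundary
clock_b = clock_a + tolerance  # just after it

print(math.floor(clock_a), math.floor(clock_b))  # 41 42
```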
Clients must be able to expect services to track real time at one second per second. If the encoder clock does not advance at 1 second per second, this needs to be fixed on the service side: by adding padding data, by cutting data, or by fixing the encoder.
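A service-side correction of the kind described above could be sketched as follows (purely illustrative; the sample rate and drift figure are assumptions, and real implementations would spread the correction out rather than apply it in one step):

```python
# Hypothetical sketch: compensate measured encoder drift by padding
# or cutting audio samples so the output tracks 1 second per second.

SAMPLE_RATE = 48_000  # Hz, assumed audio sample rate

def correction_samples(drift_seconds):
    """Samples to insert (positive) or drop (negative) to cancel drift.

    drift_seconds > 0 means the encoder has produced more media time
    than wall-clock time has elapsed, so samples must be dropped.
    """
    return -round(drift_seconds * SAMPLE_RATE)

# Encoder ran 50 ppm fast for a day: ~4.32 s of excess media time.
print(correction_samples(4.32))  # -207360: drop ~4.3 s of samples
```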
Currently 4.8.4 "Encoder Clock Drift" is a bit vague on this. I propose we make this a very explicit SHALL statement.
It also says "the client should parse the segment to obtain this information", which I would say is entirely wrong. The client should expect 1 second to pass per second and should never need to deal with this topic at all.