MoralCode opened this issue 4 years ago
I'm looking at the very same problem, and these ideas are interesting approaches. Did you succeed in implementing any of them?
No, I haven't looked into this any further since posting this issue.
Of the three solutions, the one I would most likely implement for my project (which only needs to sync time to within about 100-200 ms) is the third one, improving the sampling, since that seems like the best way to get a substantial accuracy improvement without requiring users of this library to add additional headers to their server-side code.
I've rewritten the library with a PHP option for millisecond-order precision. Please take a look at it and let me know if it addresses your needs.
It seems like the server-side PHP just responds with the JavaScript and inserts the server's time (code).
I haven't directly tested the PHP code, but I have been playing around with the new JavaScript API, and it seems way better/cleaner than version 3.X's API.
The main point of this issue is to propose an alternative (or additional) client-side way of improving the accuracy of the server time, without needing to change anything on the server, by taking the timing of the samples into account to determine more precisely when the server ticks over to the next second (rather than just making 10 samples and keeping the lowest-latency one).
I agree that some server-side solution would be needed if anyone were seeking accuracy on the order of <10 ms; however, for my use I don't need that much accuracy, and within about 200 ms would be fine.
Would you like me to submit a pull request implementing this method of sampling to augment the current sampling method and improve accuracy?
The PHP version is using dynamic imports to request the server's time in milliseconds for each sample instead of using the `Date` header.
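Roughly, each sample then amounts to something like the sketch below (a simplified illustration, not the library's actual code; the endpoint path and the exported `serverNow` name are placeholders), where the PHP endpoint is assumed to respond with a tiny ES module embedding the server's current time in milliseconds:

```js
// Simplified sketch of a per-sample time request via dynamic import.
// The PHP endpoint is assumed to respond with a module such as:
//   export const serverNow = 1620000000123;
async function sampleServerTime(url = "./server-time.php") {
  const requestTime = Date.now();
  // Cache-busting query string so every sample triggers a fresh request.
  const { serverNow } = await import(`${url}?t=${requestTime}`);
  const responseTime = Date.now();
  // serverNow is in milliseconds, so the offset can be estimated to better
  // than one second, unlike the whole-second Date header.
  return { requestTime, serverNow, responseTime };
}
```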
Your idea is very clever and I welcome a pull request.
In testing this script using the built-in development server for Jekyll (`jekyll serve`) running on my local machine, I noticed that the client and server time, being from the same time source, start out perfectly in sync, but as the clock synchronizes and adjusts itself, the offset is recalculated to be something between 0 and 1000 ms, depending on when the synchronization request was made (in this case -925 ms).

My current theory as to why this happens: because the HTTP `Date` header does not specify a way to convey anything more precise than seconds, the "client" may make its request roughly halfway through a second, at say `12:00:00.567` (HH:MM:SS.ms), and the server will respond with the last whole second, a date of `12:00:00`, which the library will interpret as a 567 ms time difference and thus adjust the offset.

This leads to a situation where my local dev server (Jekyll) and my browser are running on exactly the same clock, but the example code provided in the repository causes the two to be off by some random value of up to one second, while reporting the precision as less than 10 ms.
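To make that concrete, here is a tiny illustration (not the library's actual code) of how parsing a whole-second `Date` header against a client clock that is actually in sync produces a spurious sub-second offset:

```js
// The Date header only carries whole seconds, so even a perfectly
// synchronized client appears to be hundreds of milliseconds off.
const clientNow  = new Date("2021-01-01T12:00:00.567Z"); // client's real time
const dateHeader = "Fri, 01 Jan 2021 12:00:00 GMT";      // what the server sends
const serverTime = new Date(dateHeader);                 // milliseconds are lost

console.log(serverTime - clientNow); // -567, even though the clocks agree exactly
```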
Some possible ideas for solutions:

1. A new header, `X-Time-Sync` or something, could be used to provide the server date to the library through HTTP headers with more accuracy. For example, the library could check for this header and use it over `Date` if it is found. (see #44)
2. Reflect this limitation in the reported `precision` values and reject any samples that are off by less than this amount (1000 ms in this case). For example, in the case above, users might see an offset of `0` and a precision of `+/- 1000ms`.
3. Use the timing of the samples along with the `Date` data, rather than throwing out all but the lowest-latency response. For example, since the library already records the request and response times of each sampling request, you could use this information to determine more precisely when the seconds on the server tick over, and use that to get accuracy more precise than one second.

Using this last method, you might get this data (numbers made up by me):
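For instance, the two samples on either side of the tick might have looked something like this (illustrative values only, chosen to be consistent with the figures discussed below):

```js
// Illustrative sample timings around the server's tick from 12:00:00 to
// 12:00:01 (client clock, milliseconds within that second).
const samples = [
  { request: 665, response: 702, serverDate: "12:00:00" }, // sample 2
  { request: 717, response: 752, serverDate: "12:00:01" }, // sample 3: first to report the new second
];
```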
From this you could deduce that the moment at which the server changed from `12:00:00` to `12:00:01` must have happened sometime between the sending of the previous sample, number 2 (at `12:00:00.665`), and the receiving of the sample that detected the change (at `12:00:00.752`). This would give you an absolute worst-case uncertainty of 87 ms (752 - 665).

As this is a maximum, you can also improve this accuracy a little more if you make the assumption that network latency is roughly symmetric, i.e. the request and the response each take about half of the round trip. Then you can use each sample's midpoint, `([sample X request time] + [sample X response time]) / 2`, for the bounds of when the server time could have changed. For example, using this method, the server time ticked over from `12:00:00` to `12:00:01` between `12:00:00.684` (.5 ms rounded up to the nearest ms) and `12:00:00.735` (.5 ms rounded up to the nearest ms). This provides a 51 ms window during which the server's time could have changed, a decent improvement over the 87 ms worst case.

Of course, this method's accuracy depends on the frequency at which samples are taken and on taking enough samples to "catch" one of these moments where the server's time "ticks" over to the next second.
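To show the arithmetic end to end, here is a minimal sketch of the bound calculation described above (not a patch against the library; the sample format matches the illustrative data earlier in this post):

```js
// Given recorded samples [{ request, response, serverDate }, ...] (client
// timestamps in ms plus the whole-second server date each one returned),
// find the window in which the server's clock must have ticked over.
function estimateTickWindow(samples) {
  for (let i = 1; i < samples.length; i++) {
    const before = samples[i - 1];
    const after = samples[i];
    if (before.serverDate === after.serverDate) continue;

    // Worst case: the tick happened between sending `before` and receiving `after`.
    const worstCaseWindow = after.response - before.request;

    // Refinement: assuming symmetric latency, treat each sample as having
    // observed the server's clock at the midpoint of its round trip.
    const lowerBound = Math.round((before.request + before.response) / 2);
    const upperBound = Math.round((after.request + after.response) / 2);

    return { worstCaseWindow, lowerBound, upperBound, refinedWindow: upperBound - lowerBound };
  }
  return null; // no tick was caught by these samples
}

// With the illustrative samples above, this returns
// { worstCaseWindow: 87, lowerBound: 684, upperBound: 735, refinedWindow: 51 }.
```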