NodeGuy / server-date

Make the server's clock available in the browser.
http://www.nodeguy.com/serverdate/
Mozilla Public License 2.0

Offsets may be off by up to a second #41

Open · MoralCode opened this issue 4 years ago

MoralCode commented 4 years ago

While testing this script against Jekyll's built-in development server (jekyll serve) running on my local machine, I noticed that the client and server clocks, which come from the same time source, start out perfectly in sync. However, once the library synchronizes and adjusts itself, the recalculated offset ends up somewhere between 0 and 1000 ms, depending on when the synchronization request was made (in this case -925 ms).

Screenshot_20200709_092959

My current theory about why this happens: because the HTTP Date header cannot express anything more precise than whole seconds, the client may make its request partway through a second, say at 12:00:00.567 (HH:MM:SS.ms), while the server responds with the time truncated to the last whole second, 12:00:00. The library interprets this as a 567 ms time difference and adjusts the offset accordingly.

This means that even though my local dev server (Jekyll) and my browser are running on exactly the same clock, the example code provided in the repository reports them as being off by some essentially random value of up to one second, while claiming a precision of less than 10 ms.
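To make the theory concrete, here is a rough sketch of how an offset computed from the whole-second Date header can pick up an error of up to a second (illustrative only, not the library's actual code; the names are mine):

```javascript
// Illustrative only -- not ServerDate's actual implementation.
// Estimate the clock offset from the HTTP Date header of a response.
const requestTime = Date.now();                         // e.g. 12:00:00.567 locally
const response = await fetch("/", { method: "HEAD" });
const responseTime = Date.now();

// The Date header only carries whole seconds, so up to 999 ms are lost here.
const serverTime = new Date(response.headers.get("Date")).getTime();

// Assume the server read its clock near the midpoint of the round trip.
const estimatedLocalTime = (requestTime + responseTime) / 2;
const offset = serverTime - estimatedLocalTime;         // can be off by up to ~1 s
```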

Some possible ideas for solutions

Using this last method, you might get this data (numbers made up by me):

Sample 0: Request time is 12:00:00.567. Response time is 12:00:00.613. Server Time is 12:00:00.
Sample 1: Request time is 12:00:00.620. Response time is 12:00:00.650. Server Time is 12:00:00.
Sample 2: Request time is 12:00:00.665. Response time is 12:00:00.702. Server Time is 12:00:00.
Sample 3: Request time is 12:00:00.717. Response time is 12:00:00.752. Server Time is 12:00:01.
Sample 4: Request time is 12:00:00.769. Response time is 12:00:00.811. Server Time is 12:00:01.
Sample 5: Request time is 12:00:00.821. Response time is 12:00:00.867. Server Time is 12:00:01.
Sample 6: Request time is 12:00:00.873. Response time is 12:00:00.919. Server Time is 12:00:01.
Sample 7: Request time is 12:00:00.925. Response time is 12:00:00.961. Server Time is 12:00:01.
Sample 8: Request time is 12:00:00.977. Response time is 12:00:00.998. Server Time is 12:00:01.
Sample 9: Request time is 12:00:01.029. Response time is 12:00:01.066. Server Time is 12:00:01.

From this you could deduce that the moment at which the server ticked from 12:00:00 to 12:00:01 must have happened sometime between the sending of the last sample that still showed the old second, sample 2 (12:00:00.665), and the receiving of the sample that detected the change, sample 3 (12:00:00.752). This gives an absolute worst-case window of 87 ms (752 - 665). Since that is a maximum, you can narrow it a little further if you assume that network latency is roughly symmetric, so the server most likely observed each request at the midpoint between its send and receive times.

Then you can use each sample's midpoint, [sample X request time] + ([sample X response time] - [sample X request time]) / 2, as the bound on when the server time could have changed. For example, using this method, the server time ticked over from 12:00:00 to 12:00:01 between 12:00:00.684 (the .5 ms was rounded up to the nearest ms) and 12:00:00.735 (likewise rounded up). This gives a 51 ms window during which the server's time could have changed, a decent improvement over the 87 ms worst case.

Of course, this method's accuracy depends on the sampling frequency and on taking enough samples to "catch" one of these moments where the server's time ticks over to the next second.
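Here is a rough sketch of what this could look like in code (the function name and parameters are made up by me; it assumes the polled URL is same-origin so the Date header is readable):

```javascript
// Sketch of the proposed sampling approach (illustrative, not existing library code).
// Poll the server until the Date header ticks over to the next second, then bound
// the moment of the tick using the two samples that straddle it.
async function estimateTickWindow(url, maxSamples = 50, delayMs = 50) {
  let previous = null;

  for (let i = 0; i < maxSamples; i += 1) {
    const requestTime = Date.now();
    const response = await fetch(url, { method: "HEAD", cache: "no-store" });
    const responseTime = Date.now();
    const serverSecond = new Date(response.headers.get("Date")).getTime();

    if (previous !== null && serverSecond > previous.serverSecond) {
      // Assuming roughly symmetric latency, the server most likely saw each request
      // at the midpoint of its round trip, so the tick happened between these midpoints.
      const lowerBound = (previous.requestTime + previous.responseTime) / 2;
      const upperBound = (requestTime + responseTime) / 2;
      return { serverSecond, lowerBound, upperBound, uncertainty: upperBound - lowerBound };
    }

    previous = { requestTime, responseTime, serverSecond };
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }

  return null; // never caught a tick; try more samples or a shorter delay
}
```

With the window in hand, the offset would be serverSecond minus the middle of the window, with an uncertainty of roughly half the window's width.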

simonbuehler commented 3 years ago

I'm looking at the very same problem, and these ideas are interesting approaches. Did you succeed in implementing any of them?

MoralCode commented 3 years ago

No, I haven't looked into this any further since posting this issue.

To me it seems like, of the three solutions, the one I would most likely implement for my project (which only needs to sync time to within about 100-200 ms) is the third one, improving the sampling, since it seems like the best way to get a substantial accuracy improvement without requiring users of this library to add additional headers in their server-side code.

NodeGuy commented 3 years ago

I've rewritten the library with a PHP option for millisecond-order precision. Please take a look at it and let me know if it addresses your needs.

MoralCode commented 3 years ago

It seems like the server-side PHP just responds with the JavaScript and inserts the server's time (code).

I haven't directly tested the PHP code, but I have been playing around with the new JavaScript API, and it seems much better and cleaner than version 3.x's API.

The main point of this issue is to propose an alternative (or additional) client-side way of improving the accuracy of the server time without needing to change anything on the server: take the timing of the samples into account to determine more precisely when the server ticks over to the next second, rather than just making 10 samples and taking the one with the lowest latency.

I agree that some server-side solution would be needed for anything on the order of <10 ms accuracy; however, for my use case I don't need that much accuracy, and within about 200 ms would be fine.

Would you like me to submit a pull request implementing this sampling method to augment the current one and improve accuracy?

NodeGuy commented 3 years ago

The PHP version is using dynamic imports to request the server's time in milliseconds for each sample instead of using the Date header.
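Roughly speaking, each sample then looks something like this (a sketch of the concept only, not the actual code in the repository; the endpoint and export are made up):

```javascript
// Sketch only -- not the actual ServerDate code. The PHP script is assumed to emit
// a tiny ES module whose default export is the server's current time in milliseconds.
async function sampleServerTime(endpoint) {
  const requestTime = Date.now();
  // A cache-busting query string so each dynamic import reaches the server again.
  const { default: serverTime } = await import(`${endpoint}?t=${requestTime}`);
  const responseTime = Date.now();
  return { serverTime, requestTime, responseTime };
}
```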

Your idea is very clever and I welcome a pull request.