NodeGuy / server-date

Make the server's clock available in the browser.
http://www.nodeguy.com/serverdate/
Mozilla Public License 2.0

Improve accuracy of server time estimates using HTTP Date header #46

Open MoralCode opened 3 years ago

MoralCode commented 3 years ago

This PR fixes #41. It is not quite ready for merging, as it still likely needs a few more cleanup/release-prep type things (although whether these are actually necessary may depend on @NodeGuy's discretion):

Leaving it as a draft PR for now so people can poke at it and play with it before it's 100% ready to release.

simonbuehler commented 3 years ago

hi,

I tried it against a local Docker server with a slow response time (xdebug on, etc.) and noticed that only two samples were collected, because the wait for each server response was ~2.4 seconds. This led to a seemingly correct offset but with an uncertainty of -2694. I guess the idea is that the server response time must be under a second?

I changed the call from window.location to something static like window.location + '/robots.txt', and the server responded in < 10 ms, which led to ~10 calls and an uncertainty of -65, yay! So maybe the docs should note that it's better not to trigger a (slow) boot of the whole server-side request stack (Laravel in my case) for / and instead to request something served statically, like a text or image file. A rough sketch of what I mean is below.
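Something like this (a minimal sketch, not the library's actual API; `sampleServerDate` and the `/robots.txt` path are just illustrative):

```js
// Sketch: sample the server clock by requesting a cheap static file instead
// of "/", so the round trip stays small and more samples fit in the budget.
async function sampleServerDate(url = new URL("/robots.txt", window.location.href)) {
  const requested = Date.now();                          // client time just before the request
  const response = await fetch(url, { method: "HEAD", cache: "no-store" });
  const received = Date.now();                           // client time when the response arrived
  const serverDate = new Date(response.headers.get("Date")); // Date header has 1-second resolution

  return {
    offset: serverDate.getTime() - received,              // naive offset estimate
    uncertainty: received - requested + 1000,             // round trip plus Date header granularity
  };
}
```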

Now works for me, nice work! 👍

MoralCode commented 3 years ago

Yeah, I think it's somewhat dependent on response time, since I tried to make as few assumptions as I could (i.e. not assuming that the request is processed in the middle of the round trip, with transmission taking half of that time on either end), so most of the calculations use fairly conservative estimates (I explained this in my writeup for #41). A rough sketch of that conservative bound is below.
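Roughly, the bound from a single sample looks like this (my reading of the approach, not the exact PR code; the function name is hypothetical):

```js
// The server stamped the Date header at some unknown instant between when we
// sent the request and when we received the response, and the header only has
// one-second resolution, so all we can honestly claim is an interval of offsets.
function conservativeOffsetBounds(sentMs, receivedMs, serverDateMs) {
  return {
    // Server was at least at serverDateMs and stamped no later than receivedMs.
    minOffset: serverDateMs - receivedMs,
    // Server was below serverDateMs + 1000 and stamped no earlier than sentMs.
    maxOffset: serverDateMs + 1000 - sentMs,
  };
}
```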

This is mostly just a first-pass/best-effort version with one "pass" at sampling (i.e. it samples until it detects the server ticking over to the next second and then uses that sample); a rough sketch of that loop is below. Depending on the server speed, I think you could do more advanced things, like using the guess at the server time and latency from the first pass to time "second pass" samples: sending a couple of requests with a small, deliberately chosen delay at the right moment to try to "catch" the server ticking over, rather than just sending the next request when the last one comes back. I think this could yield even tighter precision for high-performing servers, but it's pretty overkill for my planned use case.
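For illustration, the tick-detection loop is roughly this (a sketch of the idea, not the PR's actual code; `findServerTick` is a made-up name):

```js
// Keep sampling until the Date header ticks over to a new second; the sample
// that catches the tick bounds the offset much more tightly, because the
// server clock is known to have crossed an exact second boundary somewhere
// inside a small, measured window of client time.
async function findServerTick(url, maxSamples = 10) {
  let previous = null; // { second, sent } from the previous sample

  for (let i = 0; i < maxSamples; i++) {
    const sent = Date.now();
    const response = await fetch(url, { cache: "no-store" });
    const received = Date.now();
    const second = new Date(response.headers.get("Date")).getTime();

    if (previous !== null && second > previous.second) {
      // The server reached `second` at some client instant between sending the
      // previous request and receiving this response, so bound the offset there.
      return { minOffset: second - received, maxOffset: second - previous.sent };
    }
    previous = { second, sent };
  }
  return null; // no tick observed within maxSamples requests
}
```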

I definitely like the suggestion of using a static file to get around this. Feel free to submit a PR to my fork if you want to add it to the docs or something, and I can merge it into my branch so it appears in this PR.