Meteor-Community-Packages / meteor-timesync

NTP-style time synchronization between server and client, and facilities to use server time reactively in Meteor applications.
https://packosphere.com/mizzao/timesync
MIT License

Lower RTT with Meteor.call #35

Open nilnullzip opened 8 years ago

nilnullzip commented 8 years ago

I find that a simple Meteor.call ping runs with about half the RTT of TimeSync. To meteor.com, I'm seeing something like 150 ms vs 300 ms for TimeSync. Locally, I see 3 ms vs 25 ms. I think the difference may be due to the WebSocket used by the DDP call. In fact, if I disable WebSockets locally, I see an RTT of 35 ms with the Meteor.call vs TimeSync's 25 ms. All of this was on a non-SSL connection.

Are there perhaps other advantages to the WebApp/HTTP method employed by TimeSync?

# Measure round-trip time with a plain Meteor method call.
ping_RTT = new ReactiveVar()
Meteor.setInterval ->
  t1 = Date.now()
  Meteor.call 'ping', t1, (e, t) ->
    t2 = Date.now()
    ping_RTT.set t2 - t1
, 1000  # sample once per second
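For reference, the clock offset TimeSync ultimately needs can be estimated from the same exchange with the standard NTP-style formula. A minimal sketch in plain JavaScript (the `computeOffset` helper is illustrative, not part of either library):

```javascript
// NTP-style offset estimate from a single ping exchange.
// t1: client time at send, ts: server time at receipt, t2: client time at reply.
// Assumes the network delay is roughly symmetric in both directions.
function computeOffset(t1, ts, t2) {
  const rtt = t2 - t1;
  // Compare the server timestamp against the midpoint of the round trip.
  const offset = ts - (t1 + t2) / 2;
  return { rtt, offset };
}

// Example: client clock 500 ms behind the server, 20 ms delay each way.
const { rtt, offset } = computeOffset(1000, 1520, 1040);
console.log(rtt, offset); // 40 500
```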
mizzao commented 8 years ago

Thanks for looking into this. I was using HTTP.call because I thought it lowered latency, but hadn't considered that it would be happening over normal AJAX instead of WebSocket. The first versions of this library did use Meteor.call. This definitely raises an interesting counterexample.

Would you be able to take some more data samples of the settings you described? I'd be happy to change to using Meteor.call - in fact, it would make things a lot simpler (e.g. #30, #31).

nilnullzip commented 8 years ago

I'll try to get some numbers over SSL.

So a third possibility, after HTTP and Meteor.call, is a raw WebSocket. My concern with DDP is that a single WebSocket is multiplexed to serve multiple sources, which has to introduce queueing delays. A dedicated WebSocket would presumably flow more freely, with the multiplexing of the channel done at the OS level.

mizzao commented 8 years ago

Yes, that's right, I remember now. When DDP traffic is heavy, the RTT delay can be heavily biased in one direction versus the other, so the computed offset is inaccurate. As we have it now, there may be a little more latency, since it's not over WebSocket, but it's also not fighting with the rest of the DDP traffic.
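That bias can be shown directly: with the NTP-style estimate `offset = ts - (t1 + t2) / 2`, any asymmetry between the forward and return delays shifts the result by half the difference. A self-contained sketch (all numbers are illustrative):

```javascript
// Simulate one ping exchange with a known true offset and one-way delays,
// and return the error of the NTP-style offset estimate.
function simulateExchange(trueOffset, delayForward, delayBack) {
  const t1 = 0;                              // client send time
  const ts = t1 + delayForward + trueOffset; // server timestamp on receipt
  const t2 = t1 + delayForward + delayBack;  // client receive time
  const estimate = ts - (t1 + t2) / 2;       // NTP-style offset estimate
  return estimate - trueOffset;              // estimation error
}

// Symmetric delays: no error.
console.log(simulateExchange(500, 20, 20)); // 0
// Return path queued behind heavy DDP traffic: error = (forward - back) / 2.
console.log(simulateExchange(500, 20, 120)); // -50
```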

A dedicated WS would still be handled at the application level (browser/Node), not by the OS, but it might be a little more efficient. I wonder whether it would actually be worth it just for this purpose, though.

nilnullzip commented 8 years ago

The dedicated WS would still go through Node's processing delay, but that's no worse than HTTP, and no worse than DDP.

However, the WS has lower latency than HTTP because it doesn't open a new TCP/IP connection on each request; the connection is already set up.

And the WS should have lower latency than DDP because its requests aren't blocked behind the DDP queue.

Another thing to consider is that only the lowest delay matters, not the longer ones. The best way to exploit this is to take multiple samples and keep the one with the shortest RTT. You might get lucky with DDP, but the dedicated WebSocket is probably the best approach, since its performance is independent of the DDP queue.
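The sample-selection idea above can be sketched as follows: keep the offset from the exchange with the shortest round trip, since that sample suffered the least queueing. The `bestOffset` helper is illustrative, not TimeSync's actual implementation:

```javascript
// Each sample: { rtt, offset } from one ping exchange.
// Queueing delay only ever adds to RTT, so the minimum-RTT sample
// is the one least distorted by it.
function bestOffset(samples) {
  return samples.reduce((best, s) => (s.rtt < best.rtt ? s : best)).offset;
}

const samples = [
  { rtt: 300, offset: 560 }, // delayed behind DDP traffic
  { rtt: 150, offset: 510 },
  { rtt: 40,  offset: 500 }, // cleanest round trip
];
console.log(bestOffset(samples)); // 500
```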