Closed ghost closed 3 years ago
Would this change break my current design, where the client resets its own ping interval whenever it receives data from the server? idleTimeout would no longer be reset on the server on send, but my client would still reset its ping interval on incoming data, assuming the connection is fine and that it can skip a ping for a while.
Would be good to know so there aren't sudden mass undesired disconnects when I upgrade.
Clients are required to respond to (real) WebSocket pings. You are not using "real" pings (opcode ping).
So this change would send real pings and get real pongs (automatically).
Your optimization is that you skip sending a response pong(-equivalent) when you don't have to, but there is still an underlying ACK packet sent back. With "real" WebSocket ping/pongs this would instead send back some data, and in the case of SSL, encrypted data. This is more demanding since the server needs to decrypt pongs in all cases.
But really, if you already send from the server then receiving is not a big addition.
Maybe we still need yet another option for "resetTimeoutOnSend".
Ah okay, thank you, that's what I was wondering about. So just to make sure: if I don't use server pings, and my browser client's WebSocket receives data in onmessage and I reset my "fake" client ping interval, that's fine, as the automatic ACK back to the server would still reset receiveTimeout even with this new change?
Since the ACK is treated as the client being "not idle", I assume; it just happens a tiny bit later than if it were reset on server send. If so, that's perfectly fine and non-breaking for my app at least. I was mostly unsure what qualifies as "idle".
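For concreteness, the client-side "fake ping" pattern being described might look something like this (a sketch; the helper name and interval value are illustrative, not from this thread):

```javascript
// Client-side "fake ping": ping on an interval, but restart that interval
// whenever any data arrives, on the assumption that incoming data already
// proves the connection is alive and a ping can be skipped for a while.
function createPinger(sendPing, intervalMs) {
  let timer = null;
  const schedule = () => {
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => {
      sendPing();
      schedule(); // keep pinging until the next reset
    }, intervalMs);
  };
  return {
    start: schedule,
    reset: schedule, // call from onmessage: incoming data counts as liveness
    stop: () => {
      if (timer !== null) {
        clearTimeout(timer);
        timer = null;
      }
    },
  };
}

// Usage with a browser WebSocket (URL is hypothetical):
// const ws = new WebSocket("wss://example.com/ws");
// const pinger = createPinger(() => ws.send("ping"), 10000);
// ws.onopen = () => pinger.start();
// ws.onmessage = () => pinger.reset();
// ws.onclose = () => pinger.stop();
```

With the change discussed here, the server-side counterpart of this (the idleTimeout) would only be reset by what it receives, so the client's pings still need to arrive often enough to beat the server's idleTimeout.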
Changing to only resetting idleTimeout when receiving will bring back finer timeout control
Yeah, reset on send means you could need to wait 20 seconds to find out the connection is closed, from the TCP timeout. You could be wasting 20 seconds sending to a bad connection. Reset on receive means the connection is open for sure. An autoping could be sent at the receive timeout, and if there's no response within a couple of seconds the app can close the connection. It's probably better performance to send a single ping to find out if the connection is open than to spend 20 seconds sending to a bad connection.
You mean 20 minutes. That's the real problem.
I see the default value for tcp_retries2 works out to roughly 15 minutes, so we could be sending for 15 minutes before the connection closes. There are some socket settings in uSockets, but nothing for TCP_USER_TIMEOUT or SO_KEEPALIVE, which can affect these timeouts. So either add some socket settings or just do the receive timeout in the app.
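The app-level receive timeout suggested above could be sketched roughly like this (sendPing and closeConn are placeholders for whatever your transport exposes, and the time values are illustrative):

```javascript
// App-level receive timeout: only received data resets the timer. When the
// idle period elapses, send a single probe ping; if nothing arrives within a
// short grace period, close the connection instead of blindly sending for
// minutes to a possibly dead peer.
function receiveTimeout(sendPing, closeConn, idleMs, graceMs) {
  let timer = null;
  const arm = (ms, onExpire) => {
    clearTimeout(timer);
    timer = setTimeout(onExpire, ms);
  };
  const onIdle = () => {
    sendPing();              // probe the connection once
    arm(graceMs, closeConn); // no reply in time -> close
  };
  return {
    start: () => arm(idleMs, onIdle),
    onData: () => arm(idleMs, onIdle), // call for every received message/pong
    stop: () => clearTimeout(timer),
  };
}
```

This finds a dead connection within idleMs + graceMs, rather than waiting on the kernel's retransmission timeout.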
Would be great to be able to configure the ping interval ourselves, and to set some low value, hopefully <6 seconds.
Ping is great for maintaining a reliable "user is online" state, and if the ping interval is too big, that state can be quite stale.
I have added a new option, resetIdleTimeoutOnSend, that is true by default. Setting it to false can be a good idea so that only data receives reset the timeout. Pair that with the server sending pings and you're there.
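That pairing might look something like this in uWebSockets.js (a sketch; the port, interval, and echo handler are illustrative, and you should check your version for the exact option and method names):

```javascript
// Pair resetIdleTimeoutOnSend: false with server-driven pings, so that only
// what the server *receives* (data or pongs) keeps a connection alive.
const uWS = require('uWebSockets.js');

const PING_INTERVAL_MS = 8000; // must fire well before idleTimeout elapses
const timers = new Map();      // per-socket ping timers

uWS.App().ws('/*', {
  idleTimeout: 16,               // seconds; only receives reset it now
  resetIdleTimeoutOnSend: false, // server sends no longer count as activity
  open: (ws) => {
    // Server-driven pings: the client's automatic pong resets its idleTimeout.
    timers.set(ws, setInterval(() => ws.ping(), PING_INTERVAL_MS));
  },
  message: (ws, message, isBinary) => {
    ws.send(message, isBinary);  // echo; any receive resets idleTimeout
  },
  close: (ws) => {
    clearInterval(timers.get(ws));
    timers.delete(ws);
  },
}).listen(9001, (token) => {
  if (token) console.log('Listening on 9001');
});
```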
autoPing is saved for v19 for now, as you can easily achieve pretty good results with simple alternatives.
This is how I'm going to make it:
The minimum value for idleTimeout will be 8 seconds. At least 4 seconds before the timeout, a ping will be sent. If a pong is not received within 4 seconds, it will time out.
Pings are not sent unless they have to be - if data has been received, the timeout is reset.
This is similar to what we had in v14 but with three improvements:
Pings are sent only when they have to be - in v14 they were sent unconditionally.
Pings are sent (quite) uniformly distributed over time - in v14 one single timer sent to all sockets at one single point in time causing spikes.
Pings are sent as late as possible - 4 or 8 or 16 seconds before the timeout - not halfway to the timeout. This means there's a higher probability we don't have to send them at all.
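To illustrate the timing, here is a tiny sketch of the "as late as possible" scheduling. Note the rule for choosing between the 4/8/16-second grace windows is my assumption; the thread only lists the candidate values:

```javascript
// Given an idleTimeout (minimum 8 s), pick a grace window of 4, 8 or 16
// seconds and only ping once the socket has been silent for
// idleTimeout - grace seconds. Selection rule below is an assumption:
// the largest candidate that still leaves at least half the timeout ping-free.
function pingSchedule(idleTimeoutSec) {
  if (idleTimeoutSec < 8) throw new Error('minimum idleTimeout is 8 seconds');
  const grace = [16, 8, 4].find((g) => g <= idleTimeoutSec / 2);
  return { pingAfterSec: idleTimeoutSec - grace, graceSec: grace };
}
```

For example, with the minimum idleTimeout of 8 seconds this yields a ping after 4 seconds of silence and a 4-second pong window, matching the description above.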
I like it!
sendPingsAutomatically is a new option
Some changes to ping/pong are needed:
Currently we reset idleTimeout on a successful send. This is technically okay, since the kernel will time out retransmissions, but the problem is that the default in Linux works out to roughly 15 minutes, which is quite high. Changing to only resetting idleTimeout when receiving will bring back finer timeout control without needing to tinker with kernel settings.
Some devices/browsers limit CPU time on the client side (such as when a tab is in the background or the device is in deep sleep). Apparently network interrupts still grant the client CPU time, so adding support for an "autoPing: true/false" option, as a simple-to-set automatic handling of ping/pong/timeouts from the server side, could be a good addition to the current "client-side ping" solution.
If you still want to use client-side pings, this should work like before but they have to reach the server before its idleTimeout runs out. By setting autoPing to true, the server should manage pings automatically in order to meet whatever idleTimeout is set.
Some (poorly designed) application protocols do not support any kind of client side ping (they would break backwards compatibility or otherwise not function) - here you really have to use autoPing to have the server send pings.
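A rough sketch of how the proposed option might be used (note autoPing is only a proposal in this thread, so the name and behavior are hypothetical; the implemented option is named differently above):

```javascript
// Hypothetical: let the server manage pings itself so clients that cannot
// send their own pings (throttled tabs, rigid application protocols) still
// survive the idleTimeout.
const uWS = require('uWebSockets.js');

uWS.App().ws('/*', {
  idleTimeout: 30, // seconds
  autoPing: true,  // proposed: server sends pings early enough to meet idleTimeout
  message: (ws, message, isBinary) => {
    ws.send(message, isBinary);
  },
}).listen(9001, (token) => {
  if (token) console.log('Listening on 9001');
});
```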
https://github.com/uNetworking/uWebSockets.js/issues/395#issuecomment-736981205