vasild opened this issue 5 years ago (status: Open)
The current design is that the server sends a ping and the client responds with a pong. The client knows that the server is offline if it does not receive a ping within a certain amount of time, and the server knows that the client is offline if it does not receive a pong within that same amount of time. Both sides can detect a lost connection at around the same time using a single ping/pong cycle.
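A minimal sketch of that scheme (illustrative only, not SocketCluster's actual internals; the socket interface and the ping/pong frame contents are assumptions):

```js
// Illustrative keepalive sketch; the socket API and the '#1'/'#2' frame
// contents are assumptions, not SocketCluster's real implementation.
const PING_INTERVAL = 8000;  // how often the server emits a ping
const PING_TIMEOUT = 20000;  // silence longer than this means the peer is gone

// Arm a "dead peer" timer that is pushed back every time any frame arrives.
function watchForSilence(socket, onDead) {
  let timer = setTimeout(onDead, PING_TIMEOUT);
  socket.on('message', () => {
    clearTimeout(timer);
    timer = setTimeout(onDead, PING_TIMEOUT);
  });
}

// Server side: send pings periodically and expect pongs within the timeout.
function serverSide(socket) {
  setInterval(() => socket.send('#1'), PING_INTERVAL);
  watchForSilence(socket, () => socket.close());
}

// Client side: answer every ping and expect pings within the same timeout.
function clientSide(socket) {
  socket.on('message', (msg) => { if (msg === '#1') socket.send('#2'); });
  watchForSilence(socket, () => socket.close());
}
```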
This design was originally inspired by Socket.io. It's important that the `pingInterval` is less than the `pingTimeout` (ideally less than 1/2 of it).
We can update the documentation to make this more clear. The docs on the website repo are here: https://github.com/SocketCluster/socketcluster-website/tree/master/public/app/views/docs
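For example, a server configuration along these lines keeps `pingInterval` well below `pingTimeout`. The option names come from the docs; the constructor call and the exact values are illustrative:

```js
// Illustrative SocketCluster server options: pingInterval stays well below
// pingTimeout, so several pings fit inside one timeout window.
const SocketCluster = require('socketcluster');

new SocketCluster({
  port: 8000,
  pingInterval: 8000,  // send a ping every 8 seconds
  pingTimeout: 20000   // treat 20 seconds of silence as a broken connection
});
```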
What about even enforcing that `pingInterval` < `pingTimeout` in the code? E.g. refuse to start the server with a descriptive error message if that is not true, because in this case the connections will keep dropping due to a false "ping timeout". I guess such a misconfiguration should be treated in the same way as setting `port` to a negative number (which results in "Port should be >= 0 and < 65536. Received -1234.").
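A hypothetical check along those lines (the function name and error wording are made up, mirroring the style of the existing port validation):

```js
// Hypothetical startup validation, analogous to rejecting an out-of-range port:
// refuse to start when pingInterval >= pingTimeout, because every connection
// would otherwise drop with a false "ping timeout".
function validatePingOptions({ pingInterval, pingTimeout }) {
  if (pingInterval >= pingTimeout) {
    throw new Error(
      `pingInterval should be < pingTimeout. ` +
      `Received pingInterval=${pingInterval}, pingTimeout=${pingTimeout}.`
    );
  }
}

validatePingOptions({ pingInterval: 25000, pingTimeout: 20000 });
// Error: pingInterval should be < pingTimeout. Received pingInterval=25000, pingTimeout=20000.
```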
The documentation at https://socketcluster.io/#!/docs/api-socketcluster reads:
Usually the timeout starts to tick after the ping request has been sent. In other words - if no reply is received for 20000ms after the ping request is sent, then the connection would be considered broken and would be disconnected.
However, that is not the case with socketcluster, where `pingTimeout` denotes that at least one ping response should be received within those 20000ms, regardless of how many ping requests have been sent during that time, if any. This could lead to the absurd situation of disconnecting due to ping timeout even though not a single ping request has been sent. To observe that, set `pingInterval` to a higher number than `pingTimeout`, as in the sketch below.

If changing the behavior is not feasible, then at least an update of the documentation above is warranted to make that clear.
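For instance, a configuration like the following (constructor call and values are illustrative) makes the problem observable: the 20000ms timeout window elapses before the first ping is ever due, so the connection is dropped with "ping timeout" even though no ping was sent:

```js
// Illustrative misconfiguration: pingInterval greater than pingTimeout.
const SocketCluster = require('socketcluster');

new SocketCluster({
  port: 8000,
  pingInterval: 30000, // the first ping would only go out after 30 seconds...
  pingTimeout: 20000   // ...but the connection is already declared dead at 20
});
```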