olalid closed this issue 3 weeks ago
The response from the server side would be a 429 error here.
It might be that when a connection error occurs, this loop just restarts the negotiation without any delay: https://github.com/baking-bad/pysignalr/blob/b207c6e2602e3501e7f1c023aed93f52814132bd/src/pysignalr/transport/websocket.py#L120 Since the NegotiationTimeout exception is suppressed, and that is potentially the exception generated in this case, it will not fall through to any outer error handling but will stay in this loop indefinitely. See also where the exception is generated: https://github.com/baking-bad/pysignalr/blob/b207c6e2602e3501e7f1c023aed93f52814132bd/src/pysignalr/transport/websocket.py#L145
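To illustrate what I mean, here is a minimal sketch of the pattern (not the actual pysignalr code; `NegotiationTimeout`, `attempt` and the iteration cap are stand-ins): when the exception raised inside the loop body is suppressed, the loop simply spins to the next iteration with no delay between attempts.

```python
from contextlib import suppress


class NegotiationTimeout(Exception):
    """Stand-in for pysignalr's NegotiationTimeout exception."""


def run_loop(attempt, max_iterations=5):
    """Sketch of the problem: the suppressed exception never escapes,
    so the loop restarts immediately after every failure.
    (The real code uses `while True`; the cap here just keeps the
    illustration finite.)"""
    iterations = 0
    while iterations < max_iterations:
        iterations += 1
        with suppress(NegotiationTimeout):
            attempt()
            return iterations  # a successful attempt exits the loop
    return iterations
```

If `attempt()` keeps failing, nothing ever reaches the caller, so outer error handling (or any delay logic living there) never runs.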
It would be great if someone who is familiar with how the lib works could comment on this...
After reviewing the code in more detail, my conclusion is that it is probably not the negotiation process that is the issue; it is more likely the websockets connection that is restarted without any delay if the socket is closed for some (unrelated) reason. There is an overlaid iterator on the websockets client that largely ignores the InvalidStatusCode exception raised by the lib, except for a 404 error, which is re-raised as NegotiationTimeout and in turn suppressed by the outer loop in the code. So an InvalidStatusCode, including a 429 error, would just be ignored...? The websockets lib raises the InvalidStatusCode exception for all HTTP return codes apart from 101, 301, 302, 303, 307 and 308. I am not sure what the reason for trying to "hide" these exceptions is; it would probably be better to let them fall through and let user code do something useful instead.
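If the status-code exceptions were allowed to fall through, user code could react per status instead of blindly reconnecting. A sketch of that idea, using a stand-in exception class instead of the real `websockets.exceptions.InvalidStatusCode` so the example is self-contained (the handler and its return values are purely illustrative):

```python
class InvalidStatusCode(Exception):
    """Stand-in for websockets.exceptions.InvalidStatusCode."""

    def __init__(self, status_code):
        self.status_code = status_code
        super().__init__(f"server rejected connection: HTTP {status_code}")


def handle_rejection(exc):
    """Example of user-level handling once the exception propagates:
    back off when rate-limited, give up on a missing endpoint,
    and re-raise anything unexpected."""
    if exc.status_code == 429:
        return "backoff"   # rate-limited: wait before reconnecting
    if exc.status_code == 404:
        return "give_up"   # endpoint gone: no point in retrying
    raise exc              # anything else is surfaced to the caller
```

With the current behaviour, none of these cases can be distinguished from the outside, because the exception never leaves the library's loop.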
Closing this in favour of this more correct bug report: https://github.com/baking-bad/pysignalr/issues/20
We are using this lib with https://github.com/nordicopen/pyeasee/tree/master/pyeasee, which in turn is used by an HA component (i.e. a home automation system): https://github.com/nordicopen/easee_hass. We previously used another signalr lib that is really old and has requirements that are no longer suitable, so we switched to this lib earlier this year.
Now we are getting reports from Easee (who owns the cloud we connect to) that we are generating excessive traffic to their servers during the connection process, which causes their firewall to react and rate-limit us.
In our code, we have a never-ending connection loop that has an increasing delay to prevent excessive traffic in a situation where the server would refuse connection: https://github.com/nordicopen/pyeasee/blob/537d865d9671470b6bdbf03c3eced32f6ef91221/pyeasee/easee.py#L296
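The increasing-delay pattern I mean looks roughly like this (a simplified sketch, not the actual pyeasee code; `connect_once` and the delay values are illustrative): the delay doubles after each failed attempt, capped at a maximum, so repeated refusals do not hammer the server.

```python
import asyncio


async def connect_with_backoff(connect_once, base_delay=1.0, max_delay=60.0):
    """Keep calling `connect_once` until it succeeds, doubling the
    delay between attempts (capped at `max_delay`)."""
    delay = base_delay
    while True:
        try:
            return await connect_once()
        except ConnectionError:
            # Wait before the next attempt instead of retrying immediately.
            await asyncio.sleep(delay)
            delay = min(delay * 2, max_delay)
```

But this outer loop only helps if connection failures actually propagate out of the library; if the library retries internally without delay, our backoff never gets a chance to run.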
But I am not sure what happens inside this lib in that situation: does it have internal retry logic? If so, can we control how it behaves somehow?