The previous logic caused an exception to be thrown every 30s by default so that we could issue a pong and ensure the connection doesn't time out with the webserver. This exception fired every time, which means we were abusing `wait_for` to always trigger a timeout, and there are more performant alternatives to raising exceptions.

Instead, I create a task that sleeps for a duration based on the last-message-received time and the timeout value. If after waking up we still haven't received a message, we issue the pong and reset the last-message-received time.

This results in the driver sending fewer pongs and being a little more performant by not raising exceptions.
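A minimal sketch of the idea (not the driver's actual code; the `KeepAlive` class, `pong()` method, and timing names are hypothetical): sleep relative to the last message time instead of using `asyncio.wait_for`, and only pong when the connection has actually gone idle.

```python
import asyncio
import time


class KeepAlive:
    """Sketch: keep a websocket alive without abusing wait_for timeouts."""

    def __init__(self, websocket, timeout=30.0):
        self.websocket = websocket
        self.timeout = timeout
        self.last_msg = time.monotonic()

    def message_received(self):
        # Called by the reader loop whenever any frame arrives.
        self.last_msg = time.monotonic()

    async def run(self):
        while True:
            # Sleep only for the time remaining until the connection
            # would go idle; waking up raises no exception.
            remaining = self.last_msg + self.timeout - time.monotonic()
            if remaining > 0:
                await asyncio.sleep(remaining)
                continue
            # Still no message after waking: issue a pong and reset.
            await self.websocket.pong()
            self.last_msg = time.monotonic()
```

If messages keep arriving, `remaining` is always positive when the task wakes, so the loop just goes back to sleep and never sends a pong.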
There is still a chance that a blocking `requests` API call or user code blocking the event loop causes the websocket to time out, but that behavior existed before. With the AsyncDriver/AsyncClient pull requests, only user code blocking the event loop can cause a websocket timeout.