A fix for a bug that occurred while responding to a PubSub server RECONNECT message. In essence, a single WebSocket connection object spawned two new WebSockets, both with the same pool index. My understanding is that there should only ever be one WebSocket per index in the pool, and that reconnection should only produce a single new WebSocket.
I've also updated the pre-commit script with a new version of isort and changed the flake8 repo to GitHub.
INFO Logs
14/07/23 18:49:42 - INFO - [on_message]: #0 - Reconnection required
14/07/23 18:49:42 - INFO - [handle_reconnection]: #0 - Reconnecting to Twitch PubSub server in ~60 seconds
14/07/23 18:50:42 - INFO - [on_close]: #0 - WebSocket closed
14/07/23 18:50:42 - INFO - [handle_reconnection]: #0 - Reconnecting to Twitch PubSub server in ~60 seconds
14/07/23 18:54:28 - INFO - [on_message]: 🚀 +10 → Streamer(***) - Reason: WATCH.
14/07/23 18:54:28 - INFO - [claim_bonus]: 🎁 Claiming the bonus for Streamer(***)!
14/07/23 18:54:28 - INFO - [on_message]: 🚀 +10 → Streamer(***) - Reason: WATCH.
14/07/23 18:54:28 - INFO - [claim_bonus]: 🎁 Claiming the bonus for Streamer(***)!
I was monitoring the info logs for a local Docker image and noticed that, after a PubSub RECONNECT message had finished being handled, I was getting two logs for each PubSub message. You can see this in the logs above: at 18:54:28 there are two logs each for on_message and claim_bonus, and though I've redacted the details, they are for the same channel. Looking through the debug logs, I could see that I was also getting two corresponding points_earned and claim_available PubSub messages each. From this point forward, the logs always showed these messages arriving in pairs.
After reviewing the code paths, I could see the issue was caused by on_close being called during the handle_reconnection flow. This triggers an additional call to handle_reconnection, and the flow runs fully a second time, creating an additional WebSocket. Since both replacements are created at the same index, the first new WebSocket is detached from the pool's list. The result is one new WebSocket in the pool and another detached from it, but both remain operational.
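The buggy sequence can be sketched like this (class and method names are illustrative stand-ins, not the project's actual code):

```python
class Pool:
    def __init__(self):
        self.ws_pool = []  # one WebSocket expected per index

class WebSocket:
    def __init__(self, pool, index):
        self.pool = pool
        self.index = index

    def handle_reconnection(self):
        # Creates a replacement socket and stores it at the same index.
        replacement = WebSocket(self.pool, self.index)
        self.pool.ws_pool[self.index] = replacement
        return replacement

    def on_close(self):
        # The close callback also triggers a reconnection -- this is the
        # path that fires a second time during the ~60 s reconnect window.
        return self.handle_reconnection()

pool = Pool()
ws = WebSocket(pool, 0)
pool.ws_pool.append(ws)

first = ws.handle_reconnection()  # RECONNECT handling creates socket A
second = ws.on_close()            # close fires mid-flow, creating socket B

# Socket B overwrote index 0, so socket A is detached from the pool but
# still operational -- every PubSub message is now handled twice.
assert pool.ws_pool[0] is second
assert first is not second
```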
Also, regarding pre-commit: I noticed when making a local commit that the hook failed to run. It turns out some of the packages were outdated, so I've fixed that as part of this commit.
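For reference, the relevant part of .pre-commit-config.yaml would now look roughly like this (the rev values are placeholders, not the pins in this PR; flake8's hook repo moved from GitLab to GitHub):

```yaml
repos:
  - repo: https://github.com/PyCQA/isort
    rev: "x.y.z"  # placeholder -- pin to the current isort release
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8  # previously hosted on gitlab.com
    rev: "x.y.z"  # placeholder -- pin to the current flake8 release
    hooks:
      - id: flake8
```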
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
How Has This Been Tested?
It's difficult to test because it relies on these two events happening in quick succession; however, I'm confident this is the cause of the issue. If this hasn't come up before, that's probably because it's rare for the server to send a RECONNECT (the docs suggest this typically happens for server maintenance) and rarer still for on_close to be called within that 60-second window.
Checklist:
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my code
[x] I have commented on my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation (README.md)
[x] My changes generate no new warnings
[x] Any dependent changes have been updated in requirements.txt