Now, what if the user loses internet, or Cloudflare decides to terminate the bot's connection to Discord? Then you're forced to log the bot's account back in just to simulate a reconnect, so you can get past Cloudflare dropping you without manually restarting the bot, which is NOT ALWAYS POSSIBLE if, for some reason, I am over 100 miles from my computer. For this you should add some sort of special reconnect endpoint for bot accounts, so they can reconnect when Cloudflare drops large bots instead of spamming IDENTIFY in the first place.
I can tell you for certain that the IDENTIFY spam is caused by Cloudflare; it is not always people testing or deliberately terminating their bots. In fact, I do this on my bot just to keep it online longer than two days, and the longest it has stayed up because of it is 14 days straight.
So, thanks for your understanding, and sorry if my bot spams IDENTIFY. I am not really restarting it: the websocket (or voice websocket) crashes, which takes down the main websocket, and then I have to call the run method in discord.py, which sends IDENTIFY by default. That is the only way to keep the bot from terminating, since in some cases it does not last even three minutes online, because the websockets Python library can fail on you (a bug) at times.
...
First of all, my internet is fine. Second of all, I SAID it can lose its websocket even when my internet is functioning properly, and Cloudflare is what makes my bot disconnect in the first place. That is why it spams IDENTIFY: there is no reconnect endpoint for when Cloudflare does this, i.e. something similar to IDENTIFY but not quite IDENTIFY.
The Gateway API already has a RESUME message.
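For reference, a rough sketch of the payload being referred to, with placeholder values. IDENTIFY (opcode 2) starts a brand-new session, while RESUME (opcode 6) replays missed events on an existing session using the session_id from the READY event and the last sequence number the client received:

```python
# RESUME payload as described in the Gateway documentation (values are placeholders)
resume_payload = {
    "op": 6,  # RESUME opcode; IDENTIFY is opcode 2
    "d": {
        "token": "BOT_TOKEN",
        "session_id": "session id received in the READY event",
        "seq": 1337,  # last sequence number ("s") seen on a dispatch
    },
}
```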
Well then I guess the Client class in discord.py needs a function named resume that reopens the asyncio event loop it closes and then sends that RESUME. At least it would help those using the run method, who currently have to hack the event loop back open with asyncio.new_event_loop(), as sketched below.
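A minimal sketch of that workaround, assuming the discord.py API of this era (where Client.run() closes the default event loop when it returns); the token is a placeholder. Note that every iteration performs a full login, i.e. another IDENTIFY:

```python
import asyncio
import discord

TOKEN = "BOT_TOKEN"  # placeholder

while True:
    # run() closed the previous loop, so install a fresh one before retrying
    asyncio.set_event_loop(asyncio.new_event_loop())

    client = discord.Client()  # picks up the newly installed loop
    try:
        client.run(TOKEN)  # logs in from scratch -> a fresh IDENTIFY each time
    except Exception:
        pass  # websocket/voice crash: fall through and restart
```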
Not sure your concerns are valid at all. The limit only applies if you reconnect 1000 times in under 24 hours.
Also, intermittent connectivity issues are easily addressable using the resume logic provided by the gateway.
So basically, if I have 100 shards, I can only deploy 10 updates/fixes a day, right? 😛 @jhgg
I'll probably have to separate the shards/client from the rest of the bot, so that I can deploy things without restarting the shards. Maybe using AMQP between the two.
That's right @cookkkie. Python has hot code reloading functionality though. I know d.py has cog reloading. You shouldn't need to restart your bot to deploy updates/fixes o;
But, separating the gateway connection from the actual app logic is a legit strategy. I think that's how nightbot works.
Yep, I'll probably set up RabbitMQ. The exchange/queue combination seems great for pushing events. I wonder where I should store the state, though.
you could probably do everything with redis - for state & brokering
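For illustration, a hedged sketch of that split: a thin gateway process that only holds the shard connections and forwards raw dispatches into Redis, and separate worker processes that consume them and can be redeployed freely without touching the gateway sessions. The channel name, event shape, and handle() are made up for this example:

```python
import json
import redis

r = redis.Redis()  # assumes a local Redis instance

# --- gateway process: publish every dispatch it receives ---
def forward_event(shard_id, event_type, data):
    r.publish("gateway-events", json.dumps({
        "shard": shard_id,
        "t": event_type,
        "d": data,
    }))

# --- worker process: consume events and run the actual bot logic ---
def consume_events():
    pubsub = r.pubsub()
    pubsub.subscribe("gateway-events")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        event = json.loads(message["data"])
        handle(event)  # hypothetical application-level handler
```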
Hmm, isn't it possible to log the connection state (and save it to a file) and restart a bot fast enough that it could send RESUME instead of IDENTIFY? Or no?
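For context, the approach being asked about would look something like this rough, untested sketch: persist the session_id and last sequence number, then rebuild a RESUME payload on the next start. It only helps if the process comes back before the gateway invalidates the session; the file name and helpers are made up:

```python
import json

STATE_FILE = "gateway_state.json"  # hypothetical location

def save_state(session_id, last_seq):
    # call this whenever the sequence number advances (or on shutdown)
    with open(STATE_FILE, "w") as f:
        json.dump({"session_id": session_id, "seq": last_seq}, f)

def build_reconnect_payload(token):
    # returns a RESUME payload if saved state exists, else None (fall back to IDENTIFY)
    try:
        with open(STATE_FILE) as f:
            state = json.load(f)
        return {"op": 6, "d": {"token": token,
                               "session_id": state["session_id"],
                               "seq": state["seq"]}}
    except (FileNotFoundError, KeyError, ValueError):
        return None
```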
Hmm, the only challenge with reloading stuff is that the cogs aren't shared across all the shards, so you'd have to somehow reload all the cogs on every shard, which can be a challenge if you can't be on a server for every last shard.
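One possible way to handle that, reusing the Redis channel idea from above: broadcast a reload request that every shard process listens for and applies locally. The channel name is a placeholder; load_extension/unload_extension are discord.py's cog-loading methods (synchronous in the discord.py versions of this era, awaitable in 2.x):

```python
import json
import redis

r = redis.Redis()

def request_reload(cog_name):
    # invoked from any one shard, e.g. by an owner-only command
    r.publish("reload-cogs", json.dumps({"cog": cog_name}))

def reload_listener(bot):
    # each shard process runs this in a background thread
    pubsub = r.pubsub()
    pubsub.subscribe("reload-cogs")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        cog = json.loads(message["data"])["cog"]
        bot.unload_extension(cog)
        bot.load_extension(cog)
```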
Documented as of af57ad3.
We're deploying a change soon that should automatically reset your bot's token if you're starting far too many sessions within a 24-hour period. Right now that limit is 1,000 sessions, meaning successful IDENTIFY calls via the websocket; RESUMEs do not count. The reasoning behind this is that, due to bad restart scripts/code, bots tend to get into "restart loops" that run indefinitely until the bot operator notices. The idea here is that if your bot gets stuck in a restart loop, you'll have to manually intervene by updating its token.
When you reach the limit, you'll receive a notification e-mail saying your bot's token has been reset. When this happens, all active sessions for that bot will be terminated.
This limit applies to the given bot application and is global across all shards. If you are running a large bot across many shards, it's recommended to test against a separate test bot rather than testing on your large bot and restarting it.
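For completeness, a hedged sketch of the kind of restart guard this is meant to encourage: back off exponentially (with jitter) between reconnect attempts, so a crash loop can't burn through the 1,000-session budget before anyone notices. start_bot() stands in for whatever actually connects (e.g. Client.run):

```python
import random
import time

def run_with_backoff(start_bot, base=5, cap=900):
    attempt = 0
    while True:
        try:
            start_bot()      # blocks until the connection dies or run() returns
            attempt = 0      # a clean exit resets the backoff
        except Exception:
            attempt += 1     # a crash grows the delay
        delay = min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)
        time.sleep(delay)    # wait before the next IDENTIFY
```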
TODO: