Closed: geerlingguy closed this issue 7 years ago.
So the old code closed the connection as soon as the process ran, and then tried a reconnect when the process failed. The problem with that was that every iteration through the while loop (depending on the beanstalkd timeouts) was closing and reopening the DB connection, which caused issues further down the pipeline with MySQL when you scaled out the workers. This also happened whenever a job finished. So if you had dozens of really fast jobs in there (say you're sending a newsletter email to 200 people) you ended up blocking the connections to MySQL. Basically it caused the main Magento stack to start shitting itself.
The most recent change was watching for these disconnect events, throwing an exception for them, and reconnecting based on that exception. The idea being that instead of constantly flipping the MySQL connection, it's only flipped when absolutely required.
Additionally there was an issue where Magento would periodically think that a connection was still open even though it had dropped, so the explicit closeDbConnection logic was put in to reset the Magento DB state.
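To illustrate the pattern, here's a rough sketch of the idea described above (not the module's actual code; `runNextJob()` is a made-up placeholder for "reserve a job from beanstalkd and process it"):

```php
<?php
// Rough sketch of the exception-driven reconnect pattern, not the module's
// actual code. Assumes a standard Magento 1 bootstrap.
require_once 'app/Mage.php';
Mage::app();

$resource = Mage::getSingleton('core/resource');

while (true) {
    try {
        runNextJob();
    } catch (Exception $e) {
        // Anything other than a dropped connection should still bubble up.
        if (stripos($e->getMessage(), 'gone away') === false) {
            throw $e;
        }

        // The error is still logged, as noted above.
        Mage::log($e->getMessage());

        // Clear Magento's cached adapter state so the next query reconnects,
        // which is the role the explicit closeDbConnection logic plays.
        $resource->getConnection('core_write')->closeConnection();
    }
}
```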
The exception is logged either way - so is the process actually failing, or is it just filling up the exception logs and proceeding as expected? If it's dropping the MySQL connection completely, the solution could be to adjust how it's catching the "gone away" exception and triggering the reconnect logic.
I hope this helps
It seems, after observing a few threads that I ran separately, that the code is doing what it should: after `wait_timeout` is reached, I see an exception in the log, and I see in the database that a new connection has been set up (using `SHOW FULL PROCESSLIST;`).
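A quick way to double-check this (just a diagnostic sketch, assuming a standard Magento 1 bootstrap) is to compare the connection id before and after an idle period:

```php
<?php
// Diagnostic sketch (not part of the module): print the session wait_timeout
// and the current connection id via Magento's read adapter. Run from the
// Magento root; if CONNECTION_ID() changes after an idle period, the adapter
// has silently set up a new connection.
require_once 'app/Mage.php';
Mage::app();

$read = Mage::getSingleton('core/resource')->getConnection('core_read');

print_r($read->fetchRow("SHOW VARIABLES LIKE 'wait_timeout'"));
echo $read->fetchOne('SELECT CONNECTION_ID()') . PHP_EOL;
```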
Since we have a number of workers per queue, and our `wait_timeout` is 3600 seconds (60 minutes), there are a number of these exceptions logged.
Do you recommend setting a much higher `wait_timeout`, or are the exceptions mostly harmless in your experience?
Unless I'm misunderstanding, it sounds like it's doing precisely what is expected: i.e. connecting, processing, then reconnecting when the connection dies.
Extending the wait_timeout on the MySQL side is, in my experience, not the best idea, because most of your DB load (at scale) will be your customer connections, and a longer wait_timeout will result in your connection count staying at an unreasonably high level, requiring more resources thrown at your SQL cluster (or increasing the kernel ulimit, which has its own concerns). EDIT: connections will stay open longer if there are bugs in the code - flawless code won't cause the issue - but in my experience there's not really such a thing as "flawless" Magento code.
That error is an expected error, because the connection is potentially idling for a long time while processing tasks and waiting for queue items and such, which is why it's meant to recover "invisibly" from it (but, as I say, it does log the error).
Yeah, I did some more testing with the queue watchers today, and can confirm that when I start them all at once, I see zero of the 'gone away' errors for the full `wait_timeout` period (60 minutes in my case). After that, I start seeing individual workers throwing the 'gone away' exception here and there (as some queues are idle that whole time, while others get jobs to process, and so aren't idle and don't necessarily hit `wait_timeout` until they're idle later).
I'm going to close this issue, as I think it's more of a 'known non-bug'. I do wonder, though, whether there's any way it could be handled more gracefully, since any persistent queue watcher that's not restarted regularly will run into `wait_timeout` at some point (assuming the queue doesn't get any jobs within that period of time). With MySQL's default of 28800 (8 hours), it's likely that most queues would get a job within any given 8-hour window (more so than with a 1-hour window), so they wouldn't run into the exception as often.
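For example, one possible approach (purely a sketch of an idea, not something the module currently does; the interval and the `pingDb()` name are made up) would be to ping the connection whenever a watcher has been idle for a while, so it never sits past `wait_timeout`:

```php
<?php
// Hypothetical keepalive sketch: issue a cheap query when the watcher has
// been idle so the connection never idles past wait_timeout. The 300-second
// interval and pingDb() are placeholders, not part of the module.
function pingDb()
{
    $read = Mage::getSingleton('core/resource')->getConnection('core_read');
    try {
        $read->fetchOne('SELECT 1');
    } catch (Exception $e) {
        // Already dropped; clear the adapter so the next query reconnects.
        $read->closeConnection();
    }
}

$lastActivity = time();
while (true) {
    // ... reserve/process queue jobs; update $lastActivity after each job ...
    if (time() - $lastActivity > 300) {
        pingDb();
        $lastActivity = time();
    }
}
```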
For my site, I have multiple queue workers using Beanstalkd as a queue backend, and I start them with `shell/queue.php --watch [queue-name]`. These queues seem to work fine, but after some time (I'm guessing after MySQL's `wait_timeout` has been reached on the database connection), there's an exception logged.

It seems that there was a fix for this issue in 2014 (see these lines specifically, moving the db connection handling outside of the `while()` loop: https://github.com/lilmuckers/magento-lilmuckers_queue/commit/715c75a786e918024a6b97bb6fdf1eea350ce315#diff-06ed6044f3f923b9c36f7a31eb80e753R180), which was part of this issue: https://github.com/lilmuckers/magento-lilmuckers_queue/issues/5

The code for the db connection handling seems to have moved back inside the `while()` loop since the fix was applied (see: https://github.com/lilmuckers/magento-lilmuckers_queue/blob/master/shell/queue.php#L183-L210), but I'm wondering if that was an accidental reversion? Or might I just be reading the new code wrong?