abw opened this issue 1 year ago
Some action items for me:
Instead of a pool, you could listen for the 'error' event when the server closes its side of the connection ( https://github.com/sidorares/node-mysql2/blob/6bc64442b402b420145781057c4aebe9618be246/lib/connection.js#L120 ) and create a new connection
Also double check that the enableKeepAlive flag is set, otherwise the client TCP connection might be unaware of the other side being closed - see the discussion in https://github.com/sidorares/node-mysql2/issues/683
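A minimal sketch of that single-connection approach, assuming mysql2's callback API; `makeConnection` is a hypothetical helper, and the error's `fatal` flag is used to decide when to reconnect:

```js
const mysql = require('mysql2');

// Hypothetical helper: create a connection and replace it whenever the
// server closes its side of the socket
function makeConnection(config) {
  const connection = mysql.createConnection({
    ...config,
    // without keep-alive the client may never notice the server closed the socket
    enableKeepAlive: true,
  });
  connection.on('error', (err) => {
    if (err.fatal) {
      current = makeConnection(config);
    }
  });
  return connection;
}

let current = makeConnection({ host: 'localhost', user: 'root', database: 'test' });
```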
And thanks a lot for the repo and readme @abw - much appreciated!
The MySQL server uses the TCP protocol to interact with its clients. So, when a client does not interact with the server for a long period of time, the server can decide to disconnect it (see the WAIT_TIMEOUT config option of MySQL, which you can change). But to be absolutely sure that MySQL does not disconnect its client, you need to enable the keepAlive option on the TCP connection from the client side.
But I do not recommend having long-running connections in production. It is better to close them automatically from time to time when they are idle.
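For reference, a minimal sketch of how you could inspect and change WAIT_TIMEOUT from a mysql2 client (connection details are placeholders):

```js
const mysql = require('mysql2/promise');

async function main() {
  const connection = await mysql.createConnection({ host: 'localhost', user: 'root' });

  // How long (in seconds) the server waits before dropping an idle connection
  const [rows] = await connection.query("SHOW VARIABLES LIKE 'wait_timeout'");
  console.log(rows); // e.g. [ { Variable_name: 'wait_timeout', Value: '28800' } ]

  // Change it for this session only (28800 s = 8 h is the MySQL default)
  await connection.query('SET SESSION wait_timeout = 28800');

  await connection.end();
}

main();
```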
So, this set of options for the mysql2 connection pool should help you achieve 'balanced' communication with your MySQL server in your specific case:
```js
maxIdle: 0,
idleTimeout: 60000,
enableKeepAlive: true,
```
The main idea behind this configuration is that idleTimeout should be much, much less than the MySQL server-side WAIT_TIMEOUT option. That way, all idle connections in the pool will be closed automatically from the client side by the connection pool before the MySQL server decides to close them from the server side.
And keepAliveInitialDelay should keep its default value (i.e. much, much less than idleTimeout). This allows the client to send keep-alive packets throughout the idleTimeout window, until the connection is closed from the client side.
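Putting the pieces together, a sketch of a pool created with these options (host, user, and database are placeholders):

```js
const mysql = require('mysql2');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'root',
  database: 'test',
  // close idle connections client-side long before the server's WAIT_TIMEOUT
  maxIdle: 0,
  idleTimeout: 60000, // 60 s vs. MySQL's default WAIT_TIMEOUT of 8 h
  // leave keepAliveInitialDelay at its default so probes fire during idleTimeout
  enableKeepAlive: true,
});

pool.query('SELECT 1 + 1 AS two', (err, results) => {
  if (err) throw err;
  console.log(results[0].two); // 2
});
```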
@komanton I'm curious -- why set keepAliveInitialDelay to 30 minutes? In your example you indicate this is "CloudRun idle * 2"; does that indicate your MySQL server's WAIT_TIMEOUT is 15 minutes?
My understanding is that keepAliveInitialDelay sets how long to wait after the last packet has been sent before starting to send keepAlive packets. Here's NodeJS' docs on the topic.
Since your idleTimeout is well below that, I'd expect the client-side idle logic to drop the connection well before the keepAlive packets even start firing. Though that sounds like what you want -- the client should gracefully drop idle connections prior to MySQL doing it ungracefully. Surely that's just ensuring idleTimeout < server WAIT_TIMEOUT?
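For context, mysql2's enableKeepAlive/keepAliveInitialDelay options map onto Node's plain net.Socket keep-alive API; a minimal illustration (port and delay are arbitrary):

```js
const net = require('net');

// Enable TCP keep-alive probes once the socket has been idle for
// `initialDelay` milliseconds; this is what the mysql2 flags control
const socket = net.createConnection({ host: 'localhost', port: 3306 }, () => {
  socket.setKeepAlive(true, 10000); // start probing after 10 s of silence
});
```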
@abentpole You are right. It was my mistake to set keepAliveInitialDelay to 30 minutes. I have updated my comment to fix it according to the information in yours. In short, we need to keep the default value for keepAliveInitialDelay. I will probably have a chance to verify this in our project soon.
Any update on this issue?
Hi, we investigated this problem on our production systems for a while, and we may have a fix which we thought we should share. The pool configuration contains the following options:
```js
{
  // Keep-alive packets should be sent
  enableKeepAlive: true,
  // We should start sending them early
  keepAliveInitialDelay: 3 * 1000, // 3 seconds
  // We don't want idle connections, but it's not mandatory for the fix to work, it seems
  maxIdle: 0,
  // Idle timeout much larger than the keep-alive delay and much smaller than MySQL's timeout setting
  idleTimeout: 5 * 60 * 1000 // 5 minutes
}
```
When creating the pool, we attach an error handler to each connection it creates:
```js
const { createPool } = require('mysql2/promise');

// createPool returns the pool synchronously, so no await is needed
const connectionPool = createPool(configuration);

connectionPool.on('connection', (connection) => {
  connection.on('error', (error) => {
    // We log a warning to track stats; this is not a critical error
    logger.warning('Error received on connection. Will destroy.', { error });
    connection.destroy();
  });
});
```
Hope it helps!
@JustBeYou interesting, the pool already does this, with the exception that it only removes a connection from the pool in case of a "fatal" error (e.g. when the connection is healthy and the error is just, for example, a SQL syntax error, we can keep it)
We probably need to check all possible scenarios in which a connection is marked as being in a "closed state" against when it is actually removed from the pool; there might be some missing cases
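For illustration, mysql2 errors carry a `fatal` flag (following node-mysql's error convention), which is what such a distinction could key on from user code; a sketch under the handler wiring from the previous comment, not the pool's actual internals:

```js
pool.on('connection', (connection) => {
  connection.on('error', (error) => {
    if (error.fatal) {
      // The connection is unusable (e.g. the server closed the socket): evict it
      connection.destroy();
    }
    // Non-fatal errors leave the connection healthy, so it can stay pooled
  });
});
```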
Any update on this? Or is this the final solution, guys?
Use a connection pool.
This is a follow-on to https://github.com/sidorares/node-mysql2/issues/939
I've reproduced the problem with a self-contained repository / Docker container.
https://github.com/abw/mysql-disconnect
The work-around is described in the README. Thanks to @irepela and @MaksemM for providing the work-around.
In short, you should run queries through the pool, which will detect timed-out connections and replace them, as sketched below.
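A minimal sketch of the pool-based work-around using mysql2/promise (connection details and the users table are placeholders):

```js
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'root',
  database: 'test',
  enableKeepAlive: true,
});

async function getUsers() {
  // No manual connection management: the pool acquires a healthy connection,
  // runs the query, and releases it, replacing connections the server dropped
  const [rows] = await pool.query('SELECT * FROM users');
  return rows;
}
```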
I'm not sure that there's any action required, or anything that can be done to easily fix the problem in node-mysql2, but I wanted to post this here for visibility in case anyone else is having the same problem. The above repository hopefully makes it easy to reproduce and investigate the problem.