seanlook opened this issue 7 years ago
Hi @seanlook.
Short answer: increase max_prepared_stmt_count in MySQL server.
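For example (a minimal sketch; 1048576 is just an illustrative value, the same one suggested later in this thread), on the MySQL server:
mysql> SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count';
mysql> SET GLOBAL max_prepared_stmt_count = 1048576;
-- remember to also persist the new value in my.cnf under [mysqld] so it survives a restart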
Long answer: Many applications, including Laravel, make terrible use of prepared statements. In fact, for every request they perform PREPARE + EXECUTE + CLOSE. In other words, to run one query they execute 3 commands.
ProxySQL optimizes this by tracking every PREPARE and remembering its metadata. When a client executes CLOSE, proxysql replies with OK and closes the statement only locally (not on the backend). When a new client executes the same PREPARE, proxysql already has the metadata and doesn't execute PREPARE on the backend.
Benchmarks show that using proxysql with this algorithm can boost performance for simple point selects by up to 3x compared to the application sending commands directly to the database.
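To make the 3-command pattern concrete, this is the text-protocol equivalent of what such a client does for every single query (table and variable names are made up; drivers like PDO actually send the binary-protocol commands COM_STMT_PREPARE / COM_STMT_EXECUTE / COM_STMT_CLOSE, but the round trips are the same):
mysql> PREPARE stmt1 FROM 'SELECT * FROM orders WHERE id = ?';
mysql> SET @id = 42;
mysql> EXECUTE stmt1 USING @id;
mysql> DEALLOCATE PREPARE stmt1;  -- the CLOSE step, which ProxySQL answers locally for binary-protocol clients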
So ProxySQL does not close the stmt?
This is true: ProxySQL never closes statements.
Instead, when mysql-max_stmts_per_connection is reached (20 by default) and the connection is sent back to the connection pool, ProxySQL resets the connection: resetting the connection implicitly closes all the prepared statements, without the need to run multiple CLOSE.
Can you copy the output of SELECT * FROM stats_mysql_global WHERE variable_name LIKE '%stmt%'?
Thanks.
Thanks, @renecannao
mysql> SELECT * FROM stats_mysql_global WHERE variable_name LIKE '%stmt%';
+---------------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare | 11749 |
| Com_backend_stmt_execute | 396063 |
| Com_backend_stmt_close | 0 |
| Com_frontend_stmt_prepare | 396063 |
| Com_frontend_stmt_execute | 396063 |
| Com_frontend_stmt_close | 396063 |
| Stmt_Client_Active_Total | 0 |
| Stmt_Client_Active_Unique | 0 |
| Stmt_Server_Active_Total | 220 |
| Stmt_Server_Active_Unique | 54 |
| Stmt_Max_Stmt_id | 248 |
| Stmt_Cached | 247 |
+---------------------------+----------------+
12 rows in set (0.00 sec)
mysql> show variables like "%stmt%";
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| mysql-max_stmts_per_connection | 20 |
| mysql-max_stmts_cache | 10000 |
+--------------------------------+-------+
2 rows in set (0.00 sec)
All my proxysql nodes show Com_backend_stmt_close is 0.
If I have high concurrency in a short time, wouldn't not closing the prepared statements on the backend be a problem?
Prepared_stmt_count on the backend has decreased to a normal level now, as per your explanation.
Thank you for the output. Let me explain it:
- Com_backend_stmt_close will always be 0: it is displayed only for completeness.
- From Com_frontend_stmt_* you can see the terrible use of prepared statements by the client: for each request it executes 3 commands, PREPARE + EXECUTE + CLOSE. This is bad because running 3 commands to execute 1 request means a lot of unnecessary network latency.
- From Com_backend_stmt_prepare you can see that although the clients sent 396063 PREPARE, proxysql executed only 11749 (less than 3%), because it didn't close them previously. ProxySQL is boosting your performance here: while your clients have executed 1188189 (396063x3) requests, proxysql has internally tracked all of the prepared statements and executed only 407812 (396063+11749) requests on the backends, saving almost 66% of the traffic.
- Stmt_Max_Stmt_id=248 means that in total the application has executed 247 (248-1) unique PREPARE.
- Stmt_Server_Active_Total=220 means that currently there are 220 prepared statements prepared on the servers (in total).
- Because mysql-max_stmts_per_connection=20 while Stmt_Max_Stmt_id=248, proxysql resets the connections fairly often. I would recommend increasing mysql-max_stmts_per_connection a bit (maybe to 30 or 40) for even better performance!
Of course, if you increase mysql-max_stmts_per_connection you also have to increase max_prepared_stmt_count in the MySQL server. Try not to be conservative, and increase max_prepared_stmt_count quite high.
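For reference, raising mysql-max_stmts_per_connection can be done from the ProxySQL admin interface like this (40 is only the example value mentioned above):
mysql> UPDATE global_variables SET variable_value='40' WHERE variable_name='mysql-max_stmts_per_connection';
mysql> LOAD MYSQL VARIABLES TO RUNTIME;
mysql> SAVE MYSQL VARIABLES TO DISK;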
If I have high concurrency in a short time, wouldn't not closing the prepared statements on the backend be a problem?
What can happen is that if you have a lot of connections, they will add up. For example, if you have 20 prepared statements prepared in each of 800 connections, that is 16000 prepared statements (very close to max_prepared_stmt_count's default of 16382).
That is why you should increase max_prepared_stmt_count.
To sum up:
- increase mysql-max_stmts_per_connection in ProxySQL
- increase max_prepared_stmt_count in MySQL server

Hi @renecannao ,
I would appreciate it if you let the user decide whether or not to use this feature to boost performance.
I encountered the problem in production again today, and it's really a disaster for me. I had already set max_prepared_stmt_count to twice its previous value.
Would you provide a parameter to disable or enable this kind of prepared statement behavior?
@seanlook: disabling this feature is not an option right now, and it isn't easy to implement. That said, I believe that if you hit this issue there may be a misconfiguration (maybe!). Let's try to work together on a resolution.
- What is your current value of max_prepared_stmt_count? Is there any valid reason to not increase it to some really big value, like 1048576?
- How many ProxySQL instances do you have connecting to that MySQL server?
- What is the value of mysql_servers.max_connections?
- What is being reported by SELECT * FROM stats_mysql_connection_pool?
Re-opening the issue. Thanks
The current value of max_prepared_stmt_count is 65528. There is no valid reason not to increase it to a big value; I'm just not sure whether it works.
There are 3 ProxySQL nodes connected to the backend server.
mysql_servers.max_connections is 5000, I do not want to limit it for now.
I have restarted proxysql, so stats_mysql_connection_pool seems useless:
+-----------+-----------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+-----------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
| 110 | rm-host01 | 3306 | ONLINE | 0 | 3 | 1155 | 0 | 6160007 | 1024605554 | 620512315 | 1113 |
| 1010 | rm-host01 | 3306 | ONLINE | 0 | 1 | 8 | 0 | 273795 | 36857399 | 10331594 | 1113 |
| 1010 | rr-host02 | 3306 | ONLINE | 0 | 5 | 72 | 0 | 2390422 | 321768024 | 92324930 | 1075 |
| 1010 | rr-host03 | 3306 | ONLINE | 0 | 254 | 887 | 0 | 25029018 | 3365694616 | 973855274 | 937 |
+-----------+-----------+----------+--------+----------+----------+--------+---------+----------+-----------------+-----------------+------------+
3 nodes times 5000 max_connections times 20 mysql-max_stmts_per_connection = potentially 300000 prepared statements.
I think you should definitely increase max_prepared_stmt_count.
But more than max_prepared_stmt_count, my concern is the number of connections. ProxySQL is designed to drastically reduce the number of connections to the backends. Why are you using such a big value?
Also note that MySQL Server doesn't handle a large number of connections very well. Here is an example of how MySQL struggles with a lot of connections, and how limiting the number of connections in ProxySQL can boost performance: https://www.percona.com/blog/2016/05/19/fixing-mysql-scalability-problems-proxysql-thread-pool/
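For illustration, the per-backend limit can be lowered from the ProxySQL admin interface like this (the hostname is taken from the pool output above, and 200 is only an example value):
mysql> UPDATE mysql_servers SET max_connections=200 WHERE hostname='rm-host01';
mysql> LOAD MYSQL SERVERS TO RUNTIME;
mysql> SAVE MYSQL SERVERS TO DISK;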
Hello Rene,
How is it possible that Stmt_Server_Active_Total went all the way up to 16.5k:
+---------------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare | 402827 |
| Com_backend_stmt_execute | 899605 |
| Com_backend_stmt_close | 0 |
| Com_frontend_stmt_prepare | 900848 |
| Com_frontend_stmt_execute | 900035 |
| Com_frontend_stmt_close | 899332 |
| Stmt_Client_Active_Total | 11 |
| Stmt_Client_Active_Unique | 11 |
| Stmt_Server_Active_Total | 16551 |
| Stmt_Server_Active_Unique | 14014 |
| Stmt_Max_Stmt_id | 16273 |
| Stmt_Cached | 14736 |
+---------------------------+----------------+
when the maximum number of connections to all nodes would be 600 (3 nodes, 200 per node)?
***** DUMPING runtime_mysql_servers *****
+--------------+----------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+----------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10 | 192.168.10.173 | 3306 | ONLINE | 1 | 0 | 200 | 0 | 0 | 0 | |
| 11 | 192.168.10.174 | 3306 | ONLINE | 1 | 0 | 200 | 0 | 0 | 0 | |
| 11 | 192.168.10.173 | 3306 | ONLINE | 1 | 0 | 200 | 0 | 0 | 0 | |
| 10 | 192.168.10.174 | 3306 | OFFLINE_SOFT | 1 | 0 | 200 | 0 | 0 | 0 | |
+--------------+----------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
***** END OF DUMPING runtime_mysql_servers *****
and mysql-max_stmts_per_connection was at the default of 20:
| mysql-max_stmts_per_connection | 20
| mysql-max_stmts_cache | 10000
Sounds like it's leaking statements when resetting the connection?
Hi, I'm running into this - also using Laravel/Eloquent - did you ever find a solution for this?
Thank you
Well, nvm, I found that setting PDO::ATTR_EMULATE_PREPARES => true "solves" this.
'connections' => [
    'mysql' => [
        'driver' => 'mysql',
        'read' => [
            'host' => env('DB_HOST_READ', env('DB_HOST', '127.0.0.1')),
            'port' => env('DB_PORT_READ', env('DB_PORT', 3306)),
        ],
        'write' => [
            'host' => env('DB_HOST', '127.0.0.1'),
            'port' => env('DB_PORT', 3306),
        ],
        'sticky' => true,
        'database' => env('DB_DATABASE', 'forge'),
        'username' => env('DB_USERNAME', 'forge'),
        'password' => env('DB_PASSWORD', ''),
        'unix_socket' => env('DB_SOCKET', ''),
        'charset' => env('DB_CHARSET', 'utf8mb4'),
        'collation' => env('DB_COLLATION', 'utf8mb4_unicode_ci'),
        'prefix' => env('DB_PREFIX', ''),
        'strict' => env('DB_STRICT_MODE', true),
        'engine' => env('DB_ENGINE', null),
        'timezone' => env('DB_TIMEZONE', '+00:00'),
        'options' => array(
            // emulate prepares in PDO: queries are sent as plain text,
            // so no binary PREPARE/CLOSE ever reaches ProxySQL or the backend
            PDO::ATTR_EMULATE_PREPARES => true,
        ),
    ],
],
Instead, when mysql-max_stmts_per_connection is reached (20 by default) and the connection is sent back to the connection pool, ProxySQL resets the connection: resetting the connection implicitly closes all the prepared statements, without the need to run multiple CLOSE .
When "connection is sent back to the connection pool" happens? When application closes connection to proxysql? For example, my application have connected to proxysql for many days, so, it will not reset when max_stmts_per_connection reached? The only way is to reconnect to proxysql from application?
ProxySQL Version: proxysql-1.4.3-1-centos67.x86_64.rpm
Laravel uses prepared statements by default.
This is an emergency that happened in my production:
my backend db status:
In my other db instances that have no proxysql frontend, Com_stmt_close is very close to Com_stmt_prepare. So ProxySQL does not close the stmt? It's ok for now after I restarted the proxysql.
By the way, all the PREPARE and EXECUTE counters from stats_mysql_commands_counters are 0, but there are binary prepared statements being executed:
Thank you!