ethaniel opened this issue 7 years ago
I expected proxysql to lock the connection to a backend if a temp table is created.
Your expectation is correct. Can you check the error log to see if anything odd is reported there?
No error in the proxysql log. I only get "table doesn't exist" error on the client.
I am on ProxySQL version 1.3.2-1, codename Truls.
CREATE TEMPORARY TABLE explicitly disables multiplexing for that hostgroup.
I saw you just commented with your version: the same applies to 1.3.2-1 as well.
Yet, disabling multiplexing doesn't disable routing, so if you have a rule that sends CREATE TEMPORARY TABLE to hostgroupX but SELECT to hostgroupY, the table won't show up.
So you need to ensure that the CREATE TEMPORARY TABLE and the SELECT on the same table are executed on the same hostgroup.
You are right. I had the following routing:
DELETE FROM mysql_query_rules WHERE rule_id IN (200,201);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(200,'',500,1,3,'^SELECT.*FOR UPDATE',1);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(201,'',501,1,3,'^SELECT ',1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
500 is my WRITE hostgroup, 501 is my READ hostgroup.
I added a rule routing "CREATE TEMPORARY TABLE" to the READ hostgroup and everything was fixed:
DELETE FROM mysql_query_rules WHERE rule_id IN (200,201,202);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(200,'',500,1,3,'^SELECT.*FOR UPDATE',1);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(201,'',501,1,3,'^SELECT ',1);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(202,'',501,1,3,'^CREATE TEMPORARY TABLE ',1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
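To verify that a rule like 202 is actually matching, ProxySQL's stats interface can help; a minimal sketch, using the stats_mysql_query_rules table and its hits counter on the admin interface:

```sql
-- Run against the ProxySQL admin interface (port 6032 by default).
-- Each rule's hits counter increments whenever a query matches it, so
-- after replaying the workload you can confirm that the CREATE TEMPORARY
-- TABLE rule is being hit, not falling through to the default hostgroup.
SELECT rule_id, hits FROM stats_mysql_query_rules ORDER BY rule_id;
```

If the hits counter for the temp-table rule stays at zero while queries run, the match_digest pattern is not matching the statements as sent by the client.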
Hope this thread reaches someone who googles with the same problem. Thank you, Rene.
Great! Let's keep this thread open, so I will convert it into a documentation item
@ethaniel The one issue I can see with this query-rule solution is that it limits the use of CREATE TEMPORARY TABLE to specific hostgroups.
So in this configuration, it could never be used for writes in the write hostgroup.
Ideally, CREATE TEMPORARY TABLE routing and subsequent query routing would somehow be maintained for the entire MySQL session, similar to how it is handled natively in MySQL. @renecannao I'm not sure if this plays into your plans for the added routing capability you were planning on adding.
@leeclemens :
Ideally, CREATE TEMPORARY TABLE routing and subsequent query routing would somehow be maintained for the entire MySQL session, similar to how it is handled natively in MySQL.
If I read it correctly, you are saying that once a temporary table is created, both multiplexing and routing should be disabled.
This can be a very safe approach, but is not necessarily what you want.
In 1.3 routing is already complex, and easily error-prone if not configured correctly.
In 1.4 I am planning to make routing safer for simple setups (for example, multiplexing disabled by default) yet far more configurable for experienced DBAs who want to tweak routing as much as possible.
@renecannao I'm still not entirely clear about the differences between multiplexing and routing when it comes to the MySQL sessions and connections.
If you could point me to some docs or code that might help explain it to me I would appreciate it.
@leeclemens : the difference between the two is described in a few blogs/issues, although I took a note to write a blog post to explain it better.
Because of query routing, a single client can communicate to multiple hostgroups using different connections. I think this is simple to understand.
With multiplexing, a single backend connection can be used by multiple clients "at the same time".
In a classic connection pool, a connection to the backend returns to the pool when the client disconnects. In ProxySQL, because of multiplexing, a connection is returned to the connection pool as soon as the query completes, whenever ProxySQL considers it safe to do so.
For example, multiplexing is disabled if user variables are defined with @, if a transaction is open (until it commits or rolls back), if there are temporary tables, etc.
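As an illustrative sketch, each of the following statements, sent through ProxySQL, marks the client session so that its backend connection is no longer returned to the pool between queries (the exact set of conditions depends on the ProxySQL version):

```sql
-- Any of these pins the session to its current backend connection:
SET @last_id = 42;                        -- user-defined variable lives in the connection
START TRANSACTION;                        -- kept until COMMIT or ROLLBACK
CREATE TEMPORARY TABLE scratch (id INT);  -- temp table exists only in this connection
```

In each case the server-side state (variable, transaction, temp table) belongs to one specific backend connection, so handing that connection to another client would break the session.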
@renecannao Thanks for the explanation. That makes sense from my understanding related to the sql_mode tickets I opened.
For the record, I'm @leeparayno and sorry for any confusion @leeclemens for clouding your inbox. ;)
Sorry for the name confusion :(
@leeparayno : sql_mode should be handled in 1.4 already, if you want to give it a try.
@renecannao I tried the solution @ethaniel proposed, allowing the read-only hostgroup to handle both the CREATE TEMPORARY TABLE and the SELECT that follows it. However, in a read hostgroup with multiple MySQL replicas, I'm seeing the temp table created on one slave, presumably, while the SELECT is sent to the other slave in the hostgroup. By your definitions, query routing might be the related issue in this case.
2017-03-18 01:19:35 MySQL_Session.cpp:2253:handler(): [WARNING] Error during query on (1,master.website.com,3306): 1146, Table 'performance_db.temp_table_for__911858' doesn't exist
2017-03-18 01:19:56 MySQL_Session.cpp:2253:handler(): [WARNING] Error during query on (2,replica2.website.com,3306): 1050, Table 'performance_db.temp_table_for_721' already exists
2017-03-18 01:20:19 MySQL_Session.cpp:2253:handler(): [WARNING] Error during query on (1,master.website.com,3306): 1146, Table 'performance_db.temp_table_for_944228' doesn't exist
2017-03-18 01:20:19 MySQL_Session.cpp:2253:handler(): [WARNING] Error during query on (1,master.website.com,3306): 1146, Table 'performance_db.temp_table_for_944228' doesn't exist
However, it also looks like I'm having a different issue, where the SELECT for the temp table went to the master hostgroup while the CREATE TEMPORARY TABLE may have been sent to the read hostgroup. I do have the user set with transaction_persistent = 1 and default_hostgroup = 1 (the master hostgroup). I don't believe all of these queries being sent to the master hostgroup are inside a transaction, in which case I would have expected all of the operations (the CREATE TEMPORARY TABLE and the following SELECT queries) to be sent to the sole master in the master hostgroup.
We have some code that also looks for a previously created temp table by its predefined name and drops it. So I would probably also need an additional query rule so that DROP TEMPORARY TABLE goes to the same server as the rest of the queries. This is probably what the partial log entries show, where the table already exists on the replica2.website.com MySQL replica host.
Currently (as of the most recent test):
INSERT INTO mysql_query_rules(active, username, schemaname, match_pattern,destination_hostgroup,apply) VALUES(1,'dbuser','perf_master','^SELECT.*FOR UPDATE$',1,1);
INSERT INTO mysql_query_rules(active, username, schemaname, match_pattern,destination_hostgroup,apply) VALUES(1,'dbuser','perf_master','^SELECT',2,1);
INSERT INTO mysql_query_rules(active, username, schemaname, match_pattern,destination_hostgroup,apply) VALUES(1,'dbuser','perf_master','^CREATE TEMPORARY TABLE',2,1);
First, transaction_persistent = 1 has nothing to do with this: it only matters for transactions.
From the errors above, it seems some queries related to that table are sent to the master. I don't know which queries, but they are probably DML: according to your query rules, DML goes to the master.
In fact, CREATE TEMPORARY TABLE disables multiplexing (from that moment on, all queries to hostgroup 2 go over the same connection), but it does not disable query routing: queries from a client can still go to master or slave depending on the rules.
I'm seeing the issue where the temp table is created on one slave, presumably, and then the select is getting sent to the other slave in the hostgroup
This should not happen. Once multiplexing is disabled, queries bound for the slaves are sent over one connection only: always the same connection. Unless that connection is broken... Do you have a reproducible test case, or can you collect traffic with tcpdump? Thanks
@renecannao I don't currently have a reproducible test case. Our load test runs a complete set of all web functionality, including reads and writes of various levels.
I can try to get a tcpdump from our next run. We currently have ProxySQL deployed across 3 app servers.
I've been trying to trace the behavior from the ProxySQL logs. The other day I was investigating what options exist to turn on ProxySQL debug logging. I tried running ProxySQL under gdb, but I couldn't figure out why none of my breakpoints matched the code I had checked out from Git, even after switching to the release branch for 1.3.4 to match the version I'm running. I believe I installed the debug version of the RPM.
@renecannao It's not easy to track whether queries are routed correctly. In the v1.3.5 debug version, I can't find the debug_levels table introduced in the wiki page Multi-layer-configuration-system.
@seanlook : debug_levels should be there:
mysql> select @@version;
+------------------------+
| @@version              |
+------------------------+
| 1.3.5-2-gdba6db1_DEBUG |
+------------------------+
1 row in set (0.00 sec)
mysql> show tables;
+--------------------------------------+
| tables                               |
+--------------------------------------+
| debug_levels                         |
| global_variables                     |
| mysql_collations                     |
| mysql_query_rules                    |
| mysql_replication_hostgroups         |
| mysql_servers                        |
| mysql_users                          |
| runtime_global_variables             |
| runtime_mysql_query_rules            |
| runtime_mysql_replication_hostgroups |
| runtime_mysql_servers                |
| runtime_mysql_users                  |
| runtime_scheduler                    |
| scheduler                            |
+--------------------------------------+
14 rows in set (0.00 sec)
Although, the debug feature tied to the debug_levels table is quite outdated.
Maybe you should look into query logging to understand where queries are routed. See an example here
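As a hedged sketch of that suggestion: ProxySQL exposes an mysql-eventslog_filename global variable and a log column on mysql_query_rules, which together write matched statements to an event log (exact behavior varies across 1.3/1.4):

```sql
-- On the admin interface: write logged queries to an event log file.
SET mysql-eventslog_filename='queries.log';
LOAD MYSQL VARIABLES TO RUNTIME;

-- Flag the existing rules so the statements they match are logged;
-- updating every rule here is just for illustration.
UPDATE mysql_query_rules SET log=1;
LOAD MYSQL QUERY RULES TO RUNTIME;
```

The resulting log records, per statement, which rule matched and which hostgroup the query was sent to, which is exactly what's needed to debug routing.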
Hi there, we have a similar situation: temp tables are always created on the master (due to the db user's default_hostgroup setting), but subsequent SELECTs involving those temp tables are routed to one of the slaves (due to basic r/w-split query routing to the reader hostgroup). Is there a way (using the FLAG_IN/FLAG_OUT mechanism, for instance) to ensure that all queries following a CREATE TEMPORARY TABLE (or any other query pattern, for that matter) will definitely use the same backend connection? In other words, can we get ProxySQL to simply ignore query rules once a CREATE TEMPORARY TABLE has been fired, so that the data in those temp tables can actually be used? Thanks a bunch!
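For what it's worth, flagIN/flagOUT chain rules only within the evaluation of a single statement, so on their own they cannot carry state from one query to the next. One hedged workaround, assuming the temp tables follow a naming convention, is a rule that matches the table name itself so that CREATE, SELECT, and DROP on those tables all land in the same hostgroup (the rule_id, the reader hostgroup 501, and the temp_table_ prefix below are illustrative assumptions):

```sql
-- Hypothetical sketch: any statement mentioning the temp-table prefix is
-- routed to the reader hostgroup, regardless of whether it is a CREATE,
-- SELECT, or DROP. Place it before the generic ^SELECT rule (lower rule_id).
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply)
  VALUES (100,1,'temp_table_',501,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
```

This only works when the temp-table names are predictable; it does not help with arbitrarily named temporary tables.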
I issued 2 queries:
CREATE TEMPORARY TABLE `upd` (
  `user_id` INT NOT NULL,
  `auser_id` INT NOT NULL,
  `album_id` INT NOT NULL,
  `album_name` TEXT NOT NULL,
  `video_id` INT NOT NULL,
  `video_name` TEXT NOT NULL,
  `access` TINYINT NOT NULL,
  KEY `user_id` (`user_id`)
) ENGINE = MYISAM
and
SELECT count(*) FROM `upd`;
proxysql returned an error:
1146: Table 'gallery.upd' doesn't exist
Am I doing something wrong? I expected proxysql to lock the connection to a backend if a temp table is created.