Objectives:
Primary: Remove the communication path from the server to the client - make all data flow from the client to the server, and none in the other direction.
Secondary: All QAN metadata is available even when the server is offline and can't send back EXPLAIN and table definitions.
New options:
Server side: -e DISCONNECTED=1 (defaults to 1; set to 0 to restore the old behaviour)
Client side:
ssm-admin add mysql:queries --prefetch-metadata (default: true; set to false/0 to restore the old behaviour). In this mode, the agent should NOT listen for commands via websocket for EXPLAIN, SHOW CREATE TABLE, or SHOW INDEX FROM.
1) EXPLAIN
With log_slow_verbosity turned up (MariaDB and Percona Server; the formats may differ slightly between them), the EXPLAIN output should end up in the slow log, be fetched from there, and be recorded with the query.
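Getting the plan into the slow log is a server-side setting. A my.cnf sketch follows; the MariaDB values shown are an assumption to verify against the target version, and Percona Server uses different log_slow_verbosity values and a different output format:

```ini
# my.cnf sketch (MariaDB syntax shown; Percona Server differs)
[mysqld]
slow_query_log      = 1
long_query_time     = 0.1
# MariaDB: embed the query plan / EXPLAIN output in the slow log
log_slow_verbosity  = query_plan,explain
```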
Client side:
ssm-admin add mysql:queries --explain-when-not-in-slow-log (default to false)
If it isn't in the slow log, EXPLAIN should be executed by the query harvester as soon as the query is harvested. It must be possible to disable this in the config file and/or during ssm-admin add, because some pathological queries that make heavy use of views and sub-selects can require partial execution for EXPLAIN.
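A minimal sketch of the fallback logic, with illustrative names (sqlite3 stands in for the MySQL/MariaDB connection in the demo; the real agent would use its own driver):

```python
import sqlite3  # stand-in engine for the demo; the agent talks to MySQL/MariaDB


def should_run_explain(in_slow_log: bool,
                       explain_when_not_in_slow_log: bool) -> bool:
    """Prefer the plan already recorded in the slow log; fall back to a
    live EXPLAIN only when the operator opted in, because EXPLAIN on
    view/sub-select-heavy queries can partially execute them."""
    return (not in_slow_log) and explain_when_not_in_slow_log


def fetch_explain(cursor, query: str):
    """Run EXPLAIN through any DB-API cursor and return rows as dicts."""
    cursor.execute("EXPLAIN " + query)
    cols = [d[0] for d in cursor.description]
    return [dict(zip(cols, row)) for row in cursor.fetchall()]
```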
2) TABLES
SHOW CREATE TABLE output should be auto-harvested for each query, and pushed up with the query data. This can get quite intensive, so we shouldn't send it every time. Instead, we do the following:
If there is no file /opt/ss/qan-agent/cache/tables/schema.table, or it exists but the timestamp is more than an hour ago, we fetch the table definition, and touch that empty file.
If we have fetched the table definitions for any of the tables, we upload them with the query.
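The marker-file check above can be sketched like this (the cache path and one-hour TTL are from the text; function names are illustrative):

```python
import os
import time

CACHE_DIR = "/opt/ss/qan-agent/cache/tables"
TTL_SECONDS = 3600  # re-fetch a table definition at most once an hour


def needs_refresh(schema, table, now=None, cache_dir=CACHE_DIR):
    """True if we should fetch SHOW CREATE TABLE for schema.table."""
    now = time.time() if now is None else now
    marker = os.path.join(cache_dir, f"{schema}.{table}")
    try:
        age = now - os.path.getmtime(marker)
    except OSError:  # marker file does not exist yet
        return True
    return age > TTL_SECONDS


def mark_fetched(schema, table, cache_dir=CACHE_DIR):
    """Touch the empty marker file after a successful fetch."""
    marker = os.path.join(cache_dir, f"{schema}.{table}")
    os.makedirs(cache_dir, exist_ok=True)
    with open(marker, "a"):
        os.utime(marker, None)
```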
Obviously we will also need a table structure on the SSM server side to keep this data cached.
Since we save only some query examples, not all of them, we should save the matching EXPLAIN plan and the table definition that was available at the time alongside that query. The table data for the query should therefore be populated into the query info from the table cache, which is updated on a different interval.
Since the number of table definitions is arbitrary, we can save them as a JSON blob in a longtext column.
It would probably be quite convenient to save this in the same format as the one used by the report download button.
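As one possible shape for that longtext column, the per-query blob could be packed like this (the field names are illustrative, not a fixed schema):

```python
import json


def build_table_blob(definitions):
    """Pack {(schema, table): SHOW CREATE TABLE output} into one JSON
    string suitable for a longtext column and for report download."""
    payload = [
        {"schema": s, "table": t, "create": ddl}
        for (s, t), ddl in sorted(definitions.items())
    ]
    return json.dumps(payload)
```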