Closed aadant closed 7 months ago
Hi @aadant - thanks for reaching out! I've never seen this query take a long time, even on my busiest servers, so it's interesting that you are. Any chance you can look at the sys view to determine which table is the culprit? It's possible that information_schema.INNODB_TRX is where the performance hit is.
Correct, this view is shipped with MySQL, but not all sys views are safe to use in production, especially when queried repeatedly, and especially the information_schema-based ones (cc @lefred)
Gotcha. Is there a better way to get this information?
https://dev.mysql.com/doc/mysql-perfschema-excerpt/8.3/en/performance-schema-data-locks-table.html
performance_schema is typically safer to query. Transaction information can be found in performance_schema.events_transactions_current, but it does not contain the same information:
```
MySQL [(none)]> SELECT * FROM performance_schema.events_transactions_current LIMIT 1\G
*************************** 1. row ***************************
                      THREAD_ID: 1
                       EVENT_ID: 22
                   END_EVENT_ID: 22
                     EVENT_NAME: transaction
                          STATE: COMMITTED
                         TRX_ID: 422211877866008
                           GTID: AUTOMATIC
                  XID_FORMAT_ID: NULL
                      XID_GTRID: NULL
                      XID_BQUAL: NULL
                       XA_STATE: NULL
                         SOURCE: handler.cc:1357
                    TIMER_START: 24555263398000
                      TIMER_END: 24555270944000
                     TIMER_WAIT: 7546000
                    ACCESS_MODE: READ WRITE
                ISOLATION_LEVEL: REPEATABLE READ
                     AUTOCOMMIT: YES
           NUMBER_OF_SAVEPOINTS: 0
NUMBER_OF_ROLLBACK_TO_SAVEPOINT: 0
    NUMBER_OF_RELEASE_SAVEPOINT: 0
          OBJECT_INSTANCE_BEGIN: NULL
               NESTING_EVENT_ID: NULL
             NESTING_EVENT_TYPE: NULL
1 row in set (0.00 sec)
```
I believe we can join to the threads table to get the SQL text, right? data_locks is an 8.0 table, so that panel would no longer be available on 5.7 unless I kept the original implementation (which I can do).
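For illustration, a minimal sketch of the join suggested above, assuming the statement text comes from events_statements_current (the 8.0 performance_schema column names, not Dolphie's actual query):

```sql
-- Hedged sketch: pair each active transaction with its thread's
-- current statement text. Assumes performance_schema is enabled with
-- the transaction and statement instruments/consumers turned on.
SELECT
    t.PROCESSLIST_ID,
    t.PROCESSLIST_USER,
    tx.STATE,
    tx.TRX_ID,
    tx.ISOLATION_LEVEL,
    stmt.SQL_TEXT
FROM performance_schema.events_transactions_current AS tx
JOIN performance_schema.threads AS t
      ON t.THREAD_ID = tx.THREAD_ID
LEFT JOIN performance_schema.events_statements_current AS stmt
      ON stmt.THREAD_ID = tx.THREAD_ID
WHERE tx.STATE = 'ACTIVE';
```

Note that events_statements_current holds the thread's most recent statement, which may not be the one that opened the transaction.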
Looking more into the data_locks table, it seems it isn't very straightforward to implement. Do you have an idea of what query to run to retrieve valuable information to display, @aadant? Looking at how @lefred does it in his locks MySQL shell plugin, it seems there's a bit of work involved and it isn't fast to poll.
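For reference, a minimal lock-wait sketch built only on performance_schema (an assumption about the kind of information such a plugin distills, not its actual query):

```sql
-- Hedged sketch: who is blocking whom, using only performance_schema.
-- data_lock_waits links each requesting lock to the lock blocking it;
-- data_locks supplies the locked object and lock mode.
SELECT
    w.REQUESTING_ENGINE_TRANSACTION_ID AS waiting_trx,
    w.BLOCKING_ENGINE_TRANSACTION_ID   AS blocking_trx,
    dl.OBJECT_SCHEMA,
    dl.OBJECT_NAME,
    dl.LOCK_TYPE,
    dl.LOCK_MODE
FROM performance_schema.data_lock_waits AS w
JOIN performance_schema.data_locks AS dl
      ON dl.ENGINE_LOCK_ID = w.REQUESTING_ENGINE_LOCK_ID;
```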
Indeed @lefred’s code also references innodb_trx which is also mentioned in the manual.
I suggest you just time out the query using a maximum query time (10s?):
https://dev.mysql.com/blog-archive/server-side-select-statement-timeouts
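Per the linked article, the cap can be applied with the MAX_EXECUTION_TIME optimizer hint or session variable (the 10-second value below is just the figure floated above; SELECT statements only):

```sql
-- Abort this SELECT if it runs longer than 10 000 ms.
SELECT /*+ MAX_EXECUTION_TIME(10000) */ *
FROM information_schema.INNODB_TRX;

-- Or set a session-wide default for all SELECT statements:
SET SESSION MAX_EXECUTION_TIME = 10000;
```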
That's definitely a hacky fix if you're OK w/ that. It's just that dolphie would constantly be trying to run the query and get killed every time. If we can get a performant query that provides similar results, that'd be great.
If we can't get a performant query, maybe a more permanent fix is to add a parameter to disable locks monitoring for environments like yours.
It's worth noting that the processlist query uses information_schema.innodb_trx as well, which you're not having any issue with.
I've decided to turn this query off by default unless the locks panel is open. There will be a new parameter named --historical-locks that will allow users to let the query run when the locks panel isn't open so the data can be saved to its graph. I don't see a reason why it should always be running by default.
This has been pushed to v4.1.0
Thanks @charles-001! information_schema queries that run all the time should be replaced with non-locking, harmless performance_schema queries. Or not run at all.
@aadant - I agree, but I'm not sure if there are PFS table(s) that replace information_schema.innodb_trx entirely. Or maybe there are and I just don't know which tables in PFS to join to. If you can come up w/ a query that enables us to take away information_schema.innodb_trx, I'd be open to it.
Actually, MySQL bugs like https://bugs.mysql.com/bug.php?id=109539 and https://bugs.mysql.com/bug.php?id=112035 are bad for people using sys.innodb_lock_waits (cc @lefred). Fortunately Dolphie is not running them anymore.
We noticed that the Dolphie query was taking an increasingly long time. Maybe we should time it out or run it on demand only?
information_schema tables are unsafe to query on busy systems, unlike performance_schema ones.