Open chukyz opened 5 months ago
Additional information: in our OpenShift environment I have 10 pods of Hydra 1.11.10 using this database.
Hi, we are trying to work out why the database gets stuck, so here is some more information. The hydra_oauth2_refresh table currently holds 26,016,447 rows in total: 25,343,718 with active=false and 672,729 with active=true. Why is Hydra not deleting the rows with active=false, or does it still use them for something else? Our refresh-token TTL is 180 days, but if I count the refresh tokens that are active=false and whose requested_at is earlier than (today, 2024-06-06) minus 180 days, there are only 81,646 of them.
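For reference, counts like the ones above can be obtained with queries along these lines (a sketch only, assuming the Hydra 1.x hydra_oauth2_refresh schema with its active and requested_at columns):

-- Total rows and breakdown by the active flag (the totals quoted above).
SELECT COUNT(*) FROM hydra_oauth2_refresh;
SELECT active, COUNT(*) FROM hydra_oauth2_refresh GROUP BY active;

-- Inactive refresh tokens issued more than 180 days before 2024-06-06
-- (requested_at is assumed to be the issuance timestamp).
SELECT COUNT(*)
FROM hydra_oauth2_refresh
WHERE active = false
  AND requested_at < DATE_SUB('2024-06-06', INTERVAL 180 DAY);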
Can anyone suggest a way to reduce the size of this table, and help us check whether it is what causes the database to get stuck?
Thanks!
Hi, any update? Can an Ory specialist confirm whether this manual-cleanup procedure (https://github.com/ory/hydra/issues/2514#issuecomment-1050754956) is OK and will not cause errors in Hydra or in future migrations? We increased memory and CPU on the database server, but at some moments the number of database connections still spiked very high, so one suspect is the sheer number of token rows. Since we do not know Hydra's internal logic, we need confirmation that cleaning up old data is safe, so we can reduce the overload and keep database performance acceptable. A sketch of the kind of cleanup we mean is shown below.
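This is only a sketch of what we would like confirmed (not confirmed by Ory): it assumes that refresh-token rows with active = false and a requested_at older than the 180-day refresh TTL can be dropped safely, and that deleting in small batches avoids long locks; we would of course test it on a copy of the database first.

-- Batched delete of inactive refresh tokens older than the 180-day TTL
-- (repeat until the statement affects 0 rows).
DELETE FROM hydra_oauth2_refresh
WHERE active = false
  AND requested_at < DATE_SUB(NOW(), INTERVAL 180 DAY)
LIMIT 10000;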
Thanks!
Hi, any news? Today the database went down, with lots of UPDATE statements setting refresh tokens to active=false. Can you confirm whether the approach we mentioned previously can be used to delete the old rows that Hydra is not deleting?
Thanks!
Preflight checklist
Ory Network Project
No response
Describe the bug
Hi, we are using Hydra 1.11.10 and MySQL 5.7.30. During the last 15 days we have had incidents where the number of connections between Hydra and MySQL grows from 50 to 1000 (the database's max_connections); if we raise that limit, the connections in use simply grow to the new maximum. When this happens we see lots of queries like:
Rank Query ID                            Response time       Calls R/Call
==== =================================== =================== ===== ========
   1 0xEF485ED562FE133754F2419D89227CBD  1876449.4773 94.2%   9582 195.8307 26... UPDATE hydra_oauth?refresh

Query 1: 0.19 QPS, 37.03x concurrency, ID 0xEF485ED562FE133754F2419D89227CBD at byte 8165427
This item is included in the report because it matches --limit.
Scores: V/M = 262.05
Time range: 2024-06-03T08:03:07 to 2024-06-03T22:07:41
Attribute    pct   total     min     max     avg     95%  stddev  median
============ === ======= ======= ======= ======= ======= ======= =======
Count         64    9582
Exec time     94 1876449s     10s   2099s    196s    622s    227s    113s
Lock time      5    153s    38us     41s    16ms   167us   713ms    80us
Rows sent      0       0       0       0       0       0       0       0
Rows examine   2   1.57M       0   1.74k  171.58  511.45  178.25  112.70
Query size    18 898.31k      96      96      96      96       0      96
String:
  Databases   hydra_live (9200/96%), hydrapcp_l... (382/3%)
  Hosts       {DELETED} (2039/21%)... 5 more
  Users       hydra_live (9200/96%), hydrapcp_l... (382/3%)
Query_time distribution
  1us
 10us
100us
  1ms
 10ms
100ms
   1s
 10s+  ################################################################
UPDATE hydra_oauth2_refresh SET active=false WHERE request_id='[DELETED ID]'\G
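If it helps with diagnosis, checks along these lines show whether request_id is indexed and how MySQL executes the statement (a sketch; '[DELETED ID]' stands in for a real request_id value, and EXPLAIN works for UPDATE on MySQL 5.7):

-- List the indexes on the refresh-token table.
SHOW INDEX FROM hydra_oauth2_refresh;

-- Execution plan of the slow statement.
EXPLAIN UPDATE hydra_oauth2_refresh SET active=false WHERE request_id='[DELETED ID]';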
Reproducing the bug
We could not reproduce the problem; it happens randomly.
Relevant log output
No response
Relevant configuration
No response
Version
hydra 1.11.10
On which operating system are you observing this issue?
Linux
In which environment are you deploying?
Kubernetes
Additional Context
No response