Closed: monwolf closed this issue 1 year ago.
Agreed! Contributions against this branch are welcome: https://github.com/ory/hydra/pull/3013
@aeneasr I see it's merged in the v2.x branch; is there any ETA for this version?
Unfortunately not
Understood. We'll keep an eye on the releases.
Hello contributors!
I am marking this issue as stale as it has not received any engagement from the community or maintainers for a year. That does not imply that the issue has no merit! If you feel strongly about this issue, please comment and participate in the conversation!
Throughout its lifetime, Ory has received over 10,000 issues and PRs. To sustain that growth, we need to prioritize and focus on issues that are important to the community. A good indication of importance, and thus priority, is activity on a topic.
Unfortunately, burnout has become a topic of concern amongst open-source projects.
It can lead to severe personal and health issues as well as opening catastrophic attack vectors.
The motivation for this automation is to help prioritize issues in the backlog and not ignore, reject, or belittle anyone.
If this issue was marked as stale erroneously, you can exempt it by adding the `backlog` label, assigning someone, or setting a milestone for it.
Thank you for your understanding and to anyone who participated in the conversation! And as written above, please do participate in the conversation if this topic is important to you!
Thank you 🙏✌️
Preflight checklist
Describe your problem
After a year without doing any cleanup in our DB, we found 24 GB of access tokens, roughly 11M records, in the hydra_oauth2_access table. This caused performance issues when generating new tokens, and downtime when we upgraded from v1.10.5 to v1.11.7.
So we tried to use the janitor to delete all these tokens, but we ran into the following issue: the janitor tries to pull all 11M signatures in a single SELECT.
https://github.com/ory/hydra/blob/d4b2696bd72b9fc98f3959b13be2fc28aa2263bc/persistence/sql/persister_oauth2.go#L380-L384
Describe your ideal solution
Instead of fetching all the signatures and then splitting them into chunks, we could change the logic to retrieve the signatures in batches, with the overall limit as one of the exit conditions for the batch loop.
This costs more SQL SELECTs, but it is much less expensive in memory and other resources.
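A minimal sketch of what that batch loop could look like. This uses plain `database/sql`, a hypothetical `flushExpiredTokens` function, and table/column names modeled on `hydra_oauth2_access`; it is not the actual hydra persister code, and the `?` placeholders assume MySQL/SQLite (Postgres would use `$1, $2, …`):

```go
package janitor

import (
	"context"
	"database/sql"
	"strings"
	"time"
)

// flushExpiredTokens (hypothetical) deletes expired rows in batches instead
// of loading every signature into memory at once. batchSize bounds each
// SELECT, and limit bounds the total rows deleted per run, so both the
// empty batch and the overall limit act as exit conditions for the loop.
func flushExpiredTokens(ctx context.Context, db *sql.DB, notAfter time.Time, limit, batchSize int) error {
	deleted := 0
	for deleted < limit {
		n := batchSize
		if remaining := limit - deleted; remaining < n {
			n = remaining
		}

		// Fetch at most n expired signatures.
		rows, err := db.QueryContext(ctx,
			`SELECT signature FROM hydra_oauth2_access WHERE requested_at < ? LIMIT ?`,
			notAfter, n)
		if err != nil {
			return err
		}
		var signatures []interface{}
		for rows.Next() {
			var sig string
			if err := rows.Scan(&sig); err != nil {
				rows.Close()
				return err
			}
			signatures = append(signatures, sig)
		}
		rows.Close()
		if err := rows.Err(); err != nil {
			return err
		}
		if len(signatures) == 0 {
			return nil // nothing left to clean up: the other exit condition
		}

		// Delete exactly the batch we just selected.
		placeholders := strings.TrimSuffix(strings.Repeat("?,", len(signatures)), ",")
		if _, err := db.ExecContext(ctx,
			`DELETE FROM hydra_oauth2_access WHERE signature IN (`+placeholders+`)`,
			signatures...); err != nil {
			return err
		}
		deleted += len(signatures)
	}
	return nil
}
```

Keying the DELETE on the exact signatures just fetched, rather than repeating the `requested_at < ?` predicate with a LIMIT, keeps each statement small and portable (Postgres has no `DELETE ... LIMIT`).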
Workarounds or alternatives
I considered creating a shell script that runs the janitor multiple times every day.
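For example, a hypothetical crontab entry (flag names vary between Hydra versions; check `hydra janitor --help` for your release) that runs the cleanup every six hours, so each run only has a few hours' worth of expired tokens to delete:

```crontab
# Placeholder DSN and paths; adjust for your deployment.
DSN=postgres://user:password@db-host:5432/hydra
0 */6 * * * /usr/local/bin/hydra janitor --tokens "$DSN" >> /var/log/hydra-janitor.log 2>&1
```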
Version
v1.11.7
Additional Context
I could try to submit a PR, but only if it makes sense to you.