Closed boekkooi-fresh closed 4 years ago
Hi @boekkooi-fresh, thanks for reporting this issue. Nice catch!
Yes, your suggestion is a good idea. ✨
I've started working on this issue.
@boekkooi-fresh Just a thought: is the growing success count actually causing any problems for you?

Although the count keeps growing while we keep calling `Ganesha::success()`, it is cleaned up as soon as either `Ganesha::failure()` or `Ganesha::isAvailable()` is called. In other words, the count loaded via `Redis::load()` is guaranteed to be cleaned up.

It would be nicer if the count stored in Redis were cleaned up in real time, but that trades off against performance.
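The sampling concept suggested in the issue could address that trade-off: only a small fraction of `success()` calls would pay the pruning cost. Below is a rough sketch of the idea in Python rather than Ganesha's PHP, using an in-memory list as a stand-in for the Redis sorted set; the class and parameter names are invented for illustration, but the pruning step mirrors what `zRemRangeByScore` does in the real adapter.

```python
import random
import time

class SlidingWindowCounter:
    """Toy stand-in for the Redis sorted-set success counter
    used by the Rate strategy. Each success is stored with its
    timestamp as the score. Normally only load() prunes old
    entries, so calling success() alone grows the set forever.
    """

    def __init__(self, window_seconds, cleanup_probability=0.01):
        self.window = window_seconds
        self.cleanup_probability = cleanup_probability
        self.entries = []  # timestamps, i.e. the sorted-set scores

    def record_success(self, now=None):
        now = time.time() if now is None else now
        self.entries.append(now)
        # Sampling-based cleanup: with small probability, prune on
        # write, so the key stays bounded even if failure() or
        # isAvailable() is never called.
        if random.random() < self.cleanup_probability:
            self._prune(now)

    def _prune(self, now):
        # Equivalent of zRemRangeByScore(key, '-inf', now - window):
        # drop every entry older than the sliding window.
        cutoff = now - self.window
        self.entries = [t for t in self.entries if t > cutoff]

    def count(self, now=None):
        # load() always prunes before counting, so reads stay exact.
        now = time.time() if now is None else now
        self._prune(now)
        return len(self.entries)
```

With `cleanup_probability=0.01`, roughly one success in a hundred triggers a prune, which keeps the amortized write cost close to a plain `zAdd` while bounding the key size.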
Based on our testing, the cleanup didn't happen on `Ganesha::isAvailable()` for some reason. Sadly, after evaluating Ganesha the company went in another direction and we are now using the Istio circuit breaker where possible.

I wish you all the best of luck with the library!
It is a shame that Ganesha wasn't adopted, but I wish your company success and I believe the system will become more resilient. ✨

Thanks for your feedback on this project!
When using Ganesha with the Rate strategy + Redis adapter, I noticed that the Redis key for successes keeps growing and growing until a failure occurs. This seems to be because `storage->getSuccessCount()`, which calls `zRemRangeByScore` as part of `load()`, is never called, meaning that the key is never cleaned out. Maybe adding a cleanup for these keys based on some sort of sampling concept would be a good idea?