bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/rabbitmq] BLUF: We could predict scaling (HPA) needs with custom Prometheus metrics if we had a "rabbitmq_queue_messages_consumed" metric #29202

Open CrowSoda opened 1 week ago

CrowSoda commented 1 week ago

Name and Version

bitnami-rabbitmq-14.3.1

What is the problem this feature will solve?

Background:

I am using a Prometheus custom metric to drive Horizontal Pod Autoscaling based on RabbitMQ queue depth, via the metric "rabbitmq_queue_messages_ready".
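For reference, a minimal sketch of the kind of HPA wiring described above, assuming prometheus-adapter (or a similar metrics adapter) already exposes rabbitmq_queue_messages_ready as an external metric; the deployment name, queue label, and target value below are illustrative only:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa                     # hypothetical worker deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: rabbitmq_queue_messages_ready
          selector:
            matchLabels:
              queue: tasks             # hypothetical queue name
        target:
          type: AverageValue
          averageValue: "100"          # illustrative backlog per replica
```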

That's a fine metric and I can use it, but I want to employ an algorithm that correlates it with the messages already consumed for that queue, which would give us:

1. The average time it takes that queue to complete a task
2. How many tasks we have left

We can then use that to see whether the worker will fall behind (i.e. plot the slope trend) and have it scale up preemptively rather than retroactively.

What is the feature you are proposing to solve the problem?

Make available a new metric that provides insight into the number of messages consumed per queue (e.g. "rabbitmq_queue_messages_consumed" as in the title).

What alternatives have you considered?

I can just use "rabbitmq_queue_messages_ready", but on its own it isn't as robust as correlating it with messages consumed.
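To illustrate what the requested metric would enable, here is a rough sketch of a Prometheus recording rule; the rule names are made up, and the counter name is the hypothetical one from the title, assuming it were exposed per queue:

```yaml
groups:
  - name: rabbitmq-queue-backlog
    rules:
      # Estimated seconds until the current backlog drains, assuming the
      # requested (hypothetical) per-queue consumed counter existed:
      - record: queue:estimated_drain_seconds
        expr: |
          rabbitmq_queue_messages_ready
            / rate(rabbitmq_queue_messages_consumed[5m])
      # With only the existing ready gauge, deriv() yields the net rate
      # (publishes minus consumes), not the consumption rate itself:
      - record: queue:ready_net_rate
        expr: deriv(rabbitmq_queue_messages_ready[5m])
```

The second expression is the closest approximation available today: it cannot distinguish a slow consumer from a quiet publisher, which is exactly the gap the consumed counter would close.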

javsalgar commented 1 week ago

Hi!

Thank you for using Bitnami. If I read correctly, it seems that you want the RabbitMQ Prometheus plugin to expose a new metric, right? If that's the case, that would be something to check with the upstream RabbitMQ devs, as this is not related to the Bitnami packaging of RabbitMQ.

https://github.com/rabbitmq/rabbitmq-server