Open coneybeare opened 2 months ago
What's the deployment setup that you are running? Django on K8s? What prevents you from deploying one celery-exporter per application / K8s namespace?
We have lots of different apps per env/namespace, with separate brokers. Without merging, we need a unique exporter pod per app per namespace, when we could just have one per namespace.

It's just a lot of pods to manage when we could run just three, with less overhead, especially when the merging is almost free. Thought it might be a useful enhancement for others too, so I figured I would ask!
How would you differentiate metrics from two different Django apps?
Not sure how Django differs in its default setup, but we set up our FastAPI/Celery tasks to use different queue names by app and purpose, and we can filter dashboards in Grafana based on the `hostname`, `task name` and `queue_name` labels using PromQL.
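For illustration, a minimal sketch of what that per-app queue setup might look like (the app, broker and queue names below are made up for this example, not our actual config):

```python
# Hypothetical Celery app config: each application gets its own queue names,
# so dashboards can later be filtered per app on the exporter's labels,
# e.g. with a PromQL selector like celery_task_received_total{queue_name="billing"}
# or per worker via {hostname=~"billing-.*"} (label names as discussed above).
from celery import Celery

app = Celery("billing", broker="redis://billing-redis:6379/0")

# Every task from this app defaults to its own queue...
app.conf.task_default_queue = "billing"

# ...and purpose-specific tasks get their own queues on top of that.
app.conf.task_routes = {
    "billing.tasks.send_invoice": {"queue": "billing-emails"},
    "billing.tasks.reconcile": {"queue": "billing-batch"},
}
```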
> but we set up our FastAPI/Celery tasks to use different queue names by app and purpose
Are all of the applications running against the same broker (Redis / RabbitMQ) instance? Or do you have one for each? If it's one for each, how would you differentiate application A from application B, if they're using the same task and queue names?
We have different task and job names and different backends, but in case somebody else set it up that way across backends, the `hostname` label could work to disambiguate.
I'd love to pass a comma-separated list of `CE_BROKER_URL`s and have this spin up some multiprocessing under the hood, merging the results in the Prometheus metrics endpoint. This is possible with Multiprocess Mode in the standard prometheus_client library. Are there any plans to add this functionality?