I'm running celery-exporter against a GCP Redis Memorystore instance. I have configured a PodMonitoring resource to drop most metrics, keeping only `celery_task_succeeded_total` and `celery_queue_length`. Each celery worker has about 100 tasks, which means a separate queue-length metric for every task/worker combination. Up to this point everything is fine.
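For reference, the keep-filter looks roughly like this (a sketch of a GMP PodMonitoring resource; the selector label and port name are assumptions, not our actual manifest):

```yaml
# Sketch of the PodMonitoring keep-filter described above.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: celery-exporter
spec:
  selector:
    matchLabels:
      app: celery-exporter   # assumed label
  endpoints:
    - port: metrics          # assumed port name
      interval: 30s
      metricRelabeling:
        # Keep only the two metrics we care about; drop everything else.
        - sourceLabels: [__name__]
          regex: celery_task_succeeded_total|celery_queue_length
          action: keep
```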
However, we rotate our pool of celery workers every night, which results in new instance names being generated, and celery-exporter continues to create metrics (with 0 values) for the offline workers as long as they are present in Redis. This results in over a million metrics after a month.
Any recommendations for working around this? Deleting the old exporter keys from Redis is not an option.