Closed paxw-panevo closed 1 year ago
To be honest with you I don't know. I don't think we constrain the memory of the exporter in our environment. Maybe @adinhodovic can fill you in.
@paxw-panevo
Usage over 2 days; it was upgraded before that. Yeah, I see a steady climb, although not as consistent or as aggressive as what you see. Not sure why, though; maybe we'll need restarts after a set amount of memory builds up, like Celery has built into its settings. Also, old metrics are not rotated out of the exporter atm (e.g. workers going offline). Our workers have static hostnames, otherwise metric cardinality becomes huge (each host gets its own set of metrics) and those metrics stick around until the worker restarts.
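For reference, the Celery setting alluded to above looks roughly like this (a minimal sketch; the values are illustrative assumptions, and these options apply to Celery workers themselves, not to the exporter):

```python
# Hypothetical Celery configuration sketch illustrating the
# "restart after x amount of memory" behavior mentioned above.
# The option names are real Celery settings; the values are made up.
worker_max_memory_per_child = 100_000  # in kilobytes: recycle a child after ~100 MB RSS
worker_max_tasks_per_child = 1_000     # task-count analogue: recycle after 1000 tasks
```

celery-exporter doesn't expose an equivalent knob today, which is why an external memory limit plus restart is the workaround.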
We increased the memory limit and it looks like it's plateauing around ~125 MB. A definite improvement over the crash-and-restart loop that happened every 3-4 minutes.
I appreciate the responses @danihodovic @adinhodovic (quick, too!) -- Hope you guys have a good Christmas.
Merry Christmas and Happy New Year :santa:
We have a Docker service that uses the image
danihodovic/celery-exporter
We have a memory limit of 100M set for this service. However, we find that the celery-exporter service's memory use steadily climbs, and consistently around the ~3 minute mark the limit is reached, the service gets killed, and it starts again. So I was wondering: how much memory is usually needed to run this tool?
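For context, the setup described above can be sketched as a Compose fragment (hypothetical; the service name and deploy syntax are assumptions, only the image and the 100M limit come from the report):

```yaml
# Illustrative fragment, not the actual deployment config.
services:
  celery-exporter:
    image: danihodovic/celery-exporter
    deploy:
      resources:
        limits:
          memory: 100M  # the limit that was being hit around the ~3 minute mark
```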