mher / flower

Real-time monitor and web admin for Celery distributed task queue
https://flower.readthedocs.io

Possible memory leak #770

SharpEdgeMarshall opened this issue 6 years ago (status: Open)

SharpEdgeMarshall commented 6 years ago

We are experiencing a continuous memory increase in flower running on Kubernetes.

Flower version: 0.9.2
Dockerfile: https://hub.docker.com/r/ovalmoney/celery-flower/~/dockerfile/
Parameters:

Queues: 7
Workers: 17

[Screenshot from 2018-01-15 11:17 showing the memory increase]
johnarnold commented 6 years ago

Does it grow with the number of tasks run? What's your max tasks set to?
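
For context, Flower's max_tasks setting bounds how many task entries are kept in memory, which is what the question above refers to. A minimal sketch of setting it via a flowerconfig.py, assuming Flower's documented conf option; the app name and value below are placeholders:

```python
# flowerconfig.py -- loaded with: celery -A proj flower --conf=flowerconfig.py
# ("proj" is a placeholder app name)

# Cap the number of finished tasks Flower keeps in memory; older entries are
# evicted once the cap is reached. 10000 is the documented default; lowering
# it trades task history for a smaller memory footprint.
max_tasks = 10000
```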

jsynowiec commented 4 years ago

I don't want to create a new issue, so here are some details from my case:

I have an instance of Flower running on Marathon, monitoring Redis-based Celery workers. On a regular basis the instance is OOM-killed by the scheduler because of what looks like an obvious memory leak.

[Memory usage screenshot]

Queues: 42
Workers: at least 38 (scaled automatically based on queue length)

The worst part is that it happens even when everything is left idle and no tasks are being processed.

Flower 0.9.3 running as a supervised process.
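
As an aside, a rough way to confirm that memory really grows while everything is idle is to poll the Flower process's RSS over time. A minimal sketch, assuming psutil is installed; the script name and one-minute interval are arbitrary:

```python
# monitor_flower_mem.py -- periodically print the RSS of the running Flower
# process so growth can be observed even when no tasks are being processed.
import os
import time

import psutil


def find_flower_process():
    """Return the first process (other than this one) whose command line mentions 'flower'."""
    for proc in psutil.process_iter(["pid", "cmdline"]):
        if proc.info["pid"] == os.getpid():
            continue  # skip this monitoring script itself
        cmdline = proc.info.get("cmdline") or []
        if any("flower" in part for part in cmdline):
            return proc
    return None


if __name__ == "__main__":
    flower = find_flower_process()
    if flower is None:
        raise SystemExit("no flower process found")
    while True:
        rss_mib = flower.memory_info().rss / (1024 * 1024)
        print(f"{time.strftime('%H:%M:%S')} rss={rss_mib:.1f} MiB", flush=True)
        time.sleep(60)
```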

Natalique commented 3 years ago

Having the same issue with Flower 0.9.5 running in a Kubernetes cluster.

mher commented 3 years ago

https://github.com/mher/flower/pull/1111 might be the cause of the memory leak. Please try the latest master version.
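
For anyone wanting to test this, one common way to install the unreleased master branch, assuming a standard pip setup, is `pip install git+https://github.com/mher/flower.git`; adjust to your own packaging or image-build workflow.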

jrochette commented 2 years ago

Having the same issue with the latest image available on Docker Hub, running in Kubernetes.

martin-thoma commented 2 years ago

With flower==1.0.0 (latest version on PyPI):

[Memory usage screenshot]

jrochette commented 2 years ago

> With flower==1.0.0 (latest version on PyPI):

Any plans to publish a Docker image with Flower 1.0.0 on Docker Hub? We built one internally to try 1.0.0 in k8s, but we would prefer to get it from Docker Hub if possible.

jpopelka commented 1 year ago

A container running in OpenShift from the docker.io/mher/flower:1.2.0 image is periodically OOMKilled and restarted. We have only 2 queues and 3 workers.

[Screenshot from 2023-01-20 10:24 showing memory usage]

babinos87 commented 7 months ago

I can confirm that the same behaviour seems to happen in our EKS cluster using Flower 2.0.1: [memory usage screenshot]

cesar-loadsmart commented 1 month ago

I'm seeing the same behaviour with Flower 2.0.1. Any plans to address that?

cesar-loadsmart commented 1 month ago

@babinos87 have you solved this somehow?

babinos87 commented 1 month ago

@cesar-loadsmart I ended up removing the persistent volume and disabling task persistence in the Flower config, i.e. persistent = False. That helped, but I couldn't fix the issue while persistent mode was enabled.
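
A minimal sketch of that workaround as a flowerconfig.py, using Flower's documented persistent and db options; the path shown is only an example:

```python
# flowerconfig.py -- workaround described above: do not persist task state
persistent = False        # disable persistent mode (this is also Flower's default)
# db = '/data/flower.db'  # only used when persistent mode is enabled
```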