nanit / kubernetes-rabbitmq-cluster

Deploy-ready rabbitmq cluster for kubernetes
MIT License

Huge amount of worker.heartbeat #11

Closed tsn77130 closed 7 years ago

tsn77130 commented 7 years ago

Hi, I have a huge number of worker.heartbeat messages in my cluster (around 6 million), which seem to have accumulated progressively over the last few hours (the cluster has been running for ~10 days).

Do you know where they could come from?


```
Exchange         celeryev
Routing Key      worker.heartbeat
Redelivered      ●
Properties
  priority:          0
  delivery_mode:     2
  headers:
  hostname:          celery@cel.celeryapp-resync-orders-3786959454-53q95
  content_encoding:  utf-8
  content_type:      application/json
Payload (309 bytes, encoding: string)
{"sw_sys": "Linux", "clock": 9033333, "timestamp": 1498082536.210317, "hostname": "celery@cel.celeryapp-orders-3786959454-53q95", "pid": 34, "sw_ver": "3.1.18", "utcoffset": -2, "loadavg": [1.26, 1.3, 1.47], "processed": 0, "active": 0, "freq": 2.0, "type": "worker-heartbeat", "sw_ident": "py-celery"}
```
tsn77130 commented 7 years ago

It is spread like this:

```
Name                                           Node                    Features         State    Ready    Unacked  Total    incoming  deliver/get  ack
celeryev.8dcc855c-8287-4006-985f-19786ae0c297  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,996  0        233,996  7.8/s     0.00/s       0.00/s
celeryev.ff892714-d2e9-4689-8059-a417d02cfa93  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.f44470e4-36ae-49ea-811f-41c824f2bbf1  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.eb6146d5-d10a-4722-ad97-751ebe576a38  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.cf71e1a3-ccb0-4f58-b30c-1d67e2b35653  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.cf041ada-b1be-4144-a2fd-18a973fd3b4f  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.a35dc5c1-4ce2-45ef-abb0-e3f47f006851  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.7ed59dcb-ab95-47a1-99b3-610c6f52a452  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.74079c5a-b4cd-40d1-bbe3-47312cdbc7f4  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
celeryev.668f94e8-20d7-4843-bf28-188f71eb8af5  rabbitmq-1.rmq-cluster  +0 +2 AD ha-all  running  233,728  0        233,728  7.8/s     0.00/s       0.00/s
```
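
For anyone hitting the same symptom later: a quick way to confirm that these celeryev queues have no consumers is the RabbitMQ management HTTP API. This is only a sketch; the port 15672, the guest/guest credentials, and the use of the `requests` library are assumptions, so adjust them to your cluster.

```python
# List celeryev.* queues with their message and consumer counts via the
# RabbitMQ management HTTP API (assumes the management plugin is enabled).
import requests

BASE = "http://rabbitmq-1.rmq-cluster:15672"  # host from the listing above; port assumed
AUTH = ("guest", "guest")                     # assumed credentials

resp = requests.get(BASE + "/api/queues", auth=AUTH)
resp.raise_for_status()

for q in resp.json():
    if q["name"].startswith("celeryev."):
        # A growing "messages" count with 0 "consumers" means events are piling
        # up in a queue that nothing reads anymore.
        print(q["name"], "messages:", q["messages"], "consumers:", q["consumers"])
```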

erez-rabih commented 7 years ago

That looks like something to do with the Celery workers; I'm not sure it has anything to do with RabbitMQ.

tsn77130 commented 7 years ago

Yes indeed, the queues seem to have no consumers, so they grow endlessly.

I'll look into Celery; I'm closing this issue.
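
For reference, one common mitigation is to let the event queues and their messages expire, so `celeryev.<uuid>` queues abandoned by dead monitors (Flower, `celery events`, etc.) don't grow forever. The sketch below uses Celery 3.1-style uppercase setting names to match the `sw_ver` in the payload above; verify them against your Celery version's docs.

```python
# celeryconfig.py -- sketch of settings that keep celeryev queues from growing
# unbounded once a monitor stops consuming them.
# On Celery 4+ the equivalent lowercase settings are event_queue_ttl and
# event_queue_expires.

# Discard event messages that sit unconsumed for more than 5 seconds (x-message-ttl).
CELERY_EVENT_QUEUE_TTL = 5

# Auto-delete a celeryev.<uuid> queue after 60 seconds without any consumer
# (x-expires), so queues left behind by crashed or restarted monitors clean
# themselves up.
CELERY_EVENT_QUEUE_EXPIRES = 60
```

Workers also accept a `--without-heartbeat` flag that stops them from sending these heartbeat events in the first place, at the cost of losing that monitoring signal.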