berlinguyinca opened this issue 5 years ago
Are you using no_ack=True? If so, you are probably hitting this: https://github.com/eandersson/amqpstorm/issues/34
Basically, with no_ack set to true, RabbitMQ will keep sending messages over and over again, and because of how AmqpStorm was designed it will just keep adding those messages to the buffer indefinitely.
The easiest solution is to change the consumer to use no_ack=False. This will cause RabbitMQ to only send messages up to your qos setting; qos defaults to 0 (which translates to a max of 32k messages).
I can look at implementing back-pressure on large message buildups as well. I just don't have a good pattern for it at the moment.
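For reference, here is a minimal sketch of a consumer that acknowledges explicitly and caps the prefetch window. The connection credentials, queue name, and prefetch_count value are placeholders for illustration, not values from this thread.

```python
# Minimal AmqpStorm consumer sketch: ack explicitly and cap prefetch.
import amqpstorm


def on_message(message):
    # Process the message body, then acknowledge so RabbitMQ releases more
    # messages up to the prefetch window instead of flooding the client buffer.
    print(message.body)
    message.ack()


with amqpstorm.Connection('localhost', 'guest', 'guest') as connection:
    with connection.channel() as channel:
        channel.queue.declare('my_queue', durable=True)
        # Limit the number of unacknowledged messages buffered client-side.
        channel.basic.qos(prefetch_count=100)
        channel.basic.consume(callback=on_message, queue='my_queue', no_ack=False)
        channel.start_consuming()
```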
Thank you, this explains it. Much appreciated!
I haven't looked at the code, but would it be possible to stop reading from the socket if the buffer is over a certain size? (It could cause other problems like missed heartbeats, though.)
Yea I think that would be worth implementing. When I wrote this originally I relied on heartbeats to keep the connection open, but since then the design has changed and the connection will stay healthy as long as data is flowing (in both directions).
One thing that makes it difficult to track how much data is actually built up is that the data is moved off the main buffer and directly onto the channel's inbound queue. So we would need to combine the total size of the data (or the number of frames) across all channels and aggregate it back to the connection layer.
Another possible side-effect is that one channel might block (or slow down) another channel from getting new messages, since at the socket level we wouldn't know the intended target channel/consumer.
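To make that idea concrete, here is a rough, standalone sketch of the back-pressure concept being discussed. None of these class or method names come from AmqpStorm; they are hypothetical and only illustrate aggregating buffered frames across channels and gating the socket read loop.

```python
# Hypothetical back-pressure sketch: count frames buffered across all channels
# and pause the socket read loop once a threshold is exceeded.
import threading


class BackpressureGate:
    """Tracks buffered frames across channels and gates the read loop."""

    def __init__(self, max_buffered_frames=10000):
        self._max = max_buffered_frames
        self._count = 0
        self._lock = threading.Lock()
        self._resume = threading.Event()
        self._resume.set()

    def frame_buffered(self):
        # Called when a frame is moved onto any channel's inbound queue.
        with self._lock:
            self._count += 1
            if self._count >= self._max:
                self._resume.clear()

    def frame_consumed(self):
        # Called when a consumer pulls a frame off a channel's inbound queue.
        with self._lock:
            self._count -= 1
            if self._count < self._max:
                self._resume.set()

    def wait_until_ready(self, timeout=1.0):
        # The socket read loop would call this before reading more data; note
        # that a single slow channel can stall delivery for every channel,
        # which is the side-effect mentioned above.
        return self._resume.wait(timeout)
```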
I seem to have the same problem, but no_ack=False (+ prefetch_count=1) didn't solve it :( My gunicorn workers sometimes silently reboot, and I can't find out why. I emulated a load test on a local machine in a Docker container and saw that CPU load and memory usage are several times higher than without using amqp...
Would you be able to provide some example code to illustrate the issue? The no_ack=True issue in this thread is very specific: the application is consuming many thousands of messages but is only able to process them slowly.
Sorry, it seems my callback function is the culprit; it does audio processing in the background. Simple message consumption (without the actual processing) didn't show the abnormal memory/CPU usage.
Hi,
we ran into another problem: for some reason the amqp stack is consuming ungodly amounts of memory, and we feel we are doing something wrong.
Example using tracemalloc:
Top 30 lines
1: pamqp/frame.py:62: 951.8 MiB
2: pamqp/decode.py:417: 361.4 MiB
3: pamqp/decode.py:296: 165.6 MiB
4: pamqp/decode.py:258: 154.9 MiB
5: pamqp/header.py:84: 109.0 MiB
6: pamqp/body.py:21: 78.8 MiB
7: pamqp/header.py:81: 78.8 MiB
8: pamqp/frame.py:135: 57.4 MiB
9: pamqp/frame.py:157: 40.2 MiB
10: pamqp/frame.py:172: 40.2 MiB
11: pamqp/header.py:104: 20.1 MiB
12: pamqp/decode.py:157: 20.1 MiB
13: amqpstorm/channel.py:229: 18.9 MiB
14::525: 9.8 MiB
15: json/decoder.py:353: 4.6 MiB
This is after running for about 15-20 minutes and receiving maybe a million messages. Any idea why there is such a buildup of memory? It quickly exceeds a couple of GB, and we feel there is some odd issue happening.
Kind regards
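For anyone trying to reproduce this, a report in the shape of the one above can be generated with the standard-library tracemalloc module. The snippet below is a generic sketch, not the exact script used here.

```python
# Produce a "top lines" allocation report similar to the one posted above.
import tracemalloc

tracemalloc.start()

# ... run the consumer / application workload for a while ...

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')

print('Top 30 lines')
for index, stat in enumerate(top_stats[:30], 1):
    frame = stat.traceback[0]
    print('%d: %s:%d: %.1f MiB' % (index, frame.filename, frame.lineno,
                                   stat.size / (1024 * 1024)))
```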