Hutch, Bunny and RabbitMQ absolutely do not have this limitation. Multiple consumers on a shared queue become competing consumers. Assuming there are several pods running Hutch consumers and enough messages ready for delivery, you will get parallel (uncoordinated) processing between them.
Please see tutorial 2 and these RabbitMQ guides: Queues, Consumers, Concepts Overview.
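For illustration, a minimal Hutch consumer sketch (the routing key, queue name, and class name are hypothetical). If every pod runs this same class against the same broker, they all attach to the one shared queue and become competing consumers:

```ruby
require 'hutch'

# Hypothetical consumer: every pod running this class subscribes to
# the same 'orders' queue, so RabbitMQ distributes ready messages
# across all of the pods (competing consumers).
class OrderConsumer
  include Hutch::Consumer
  consume 'orders.created'
  queue_name 'orders'

  def process(message)
    # message.body is the decoded payload
    puts "processing order #{message.body['id']}"
  end
end
```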
Not sure if this helps, but might be useful for context.
In the past (please see context below), we had to apply the "Fair dispatch" advice from tutorial 2, and we ended up setting `channel_prefetch` to 16. The trade-off of queue sizes growing during spikes was acceptable to us.
We were working with "heavy" (long-running) consumers, and usually one of the consumers ended up too busy, which sounds similar to your situation.
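For reference, a minimal sketch of that setting, assuming you configure Hutch from Ruby rather than hutch.yml (the value 16 is just what worked for us):

```ruby
require 'hutch'

# Cap each consumer channel at 16 unacknowledged deliveries, so a pod
# that is busy with long-running work stops receiving new messages and
# the idle pods pick them up instead ("fair dispatch").
Hutch::Config.set(:channel_prefetch, 16)
```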
Thanks for the feedback. We'll dig in now.
The prefetch value has throughput and (client-side) concurrency effects. See Consumer Acknowledgement Modes, Prefetch and Throughput and this old but still relevant blog post.

For a project such as Hutch, the value of `1` is not great but most predictable (no unexpected natural race conditions between consumers). In the end, some projects such as Spring AMQP concluded that the default of `1` was a massive net negative and changed the default to something like `128` (the exact value is not very important). We'd consider doing the same for Hutch if necessary.
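To make the effect concrete, here is a sketch at the Bunny level (the library Hutch builds on), where prefetch maps onto basic.qos; the queue name and value are assumptions:

```ruby
require 'bunny'

conn = Bunny.new
conn.start
ch = conn.create_channel

# basic.qos: allow at most 16 unacknowledged deliveries in flight on
# this channel. 1 is the most predictable; larger values trade
# predictability for throughput.
ch.prefetch(16)

q = ch.queue('jobs', durable: true)
q.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
  # ... do the actual work here ...
  ch.ack(delivery_info.delivery_tag)
end
```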
I launched Hutch on our Kubernetes cluster, and it looks like only one of the containers is picking up jobs. Is Hutch limited to a single topic subscription for consuming messages, or is this just a config issue on my side?
I didn't see any threading or worker options in the config aside from `consumer_pool_size`.
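For reference, the only concurrency knob I could find was along these lines (a sketch; the value is illustrative):

```ruby
require 'hutch'

# consumer_pool_size sizes Hutch's in-process worker pool; the value
# here is illustrative only.
Hutch::Config.set(:consumer_pool_size, 4)
```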