Closed: planestepper closed this issue 5 years ago
The issue was that the clients were spread across different threads, although they came from the same process, so the proxy would reasonably treat each one as a new connection. The new batch design for the same processing does away with the multiple connections, which also removes the need for, and the benefit of, using this proxy.
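The behaviour described above can be sketched with plain sockets: when each thread of one process opens its own socket, the server on the other end (standing in for the proxy here; this is a hypothetical illustration, not AMQProxy itself) sees one distinct connection per thread.

```python
import socket
import socketserver
import threading

# Stand-in for the proxy: a TCP server that records every accepted connection.
accepted = []
lock = threading.Lock()

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        with lock:
            accepted.append(self.client_address)
        self.request.sendall(b"ok")  # ack so clients know they were handled

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def client():
    # Each thread opens its own socket -- just like AMQP clients spread
    # across threads of one process: the proxy sees a brand-new connection.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"hi")
        assert s.recv(2) == b"ok"

threads = [threading.Thread(target=client) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

print(len(accepted))  # 5 threads -> 5 distinct connections
```

A connection pooled per process (one shared socket) would show up as a single connection instead, which is what the batch design achieves.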
My use case consists of openresty responding with `empty_gif` and then pushing the necessary data into a queue in RabbitMQ. Using the AMQP protocol directly, RabbitMQ was able to cope with the number of messages, ingesting at a rate of about 300-400 messages per second. When I installed and configured AMQProxy, that number skyrocketed, reaching over 600 msgs/s. At one point RabbitMQ simply stopped ingesting, and the management interface in CloudAMQP became unavailable and displayed an error message (see below).
I do see the `maxConnections` setting in the `example.ini` file, although I don't see any mention of that upper limit in the codebase, nor documentation on how to set the maximum number of connections to hold open. When I checked the management interface, there were over 6000 connections open, many of them with a status of `blocking` or `blocked`.
. The instance is a Big Bunny.The load test is being run with Apache AB,
ab -k -c200 -n100000 -r 'http://<some address>/1234567890.gif'
. Openresty (nginx) was running with a single worker allowing a maximum of 400 worker connections (which I believed would limit concurrency, and therefore connections). The publisher is usingbasic_publish
, and consumers for this particular queue are in a different server. Successfully published messages are also successfully processed.We have other pieces of software using CloudAMQP, and AMQProxy will be very helpful in case we are able to get this prototype right.
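A back-of-the-envelope check (all numbers here are assumptions for illustration, not measurements) suggests how the connection count can balloon far past ab's `-c200` and nginx's 400 worker connections: if each publish opens a fresh AMQP connection that lingers for a while (handshake, teardown, broker-side bookkeeping) rather than being reused, Little's law relates the publish rate and the connection lifetime to the number of connections alive at once.

```python
# Hedged estimate -- assumed figures, not measured ones.
publish_rate = 600      # messages per second at the observed peak
conn_lifetime_s = 10    # assumed lifetime of a short-lived AMQP connection

# Little's law: concurrent connections = arrival rate * time in system.
concurrent_connections = publish_rate * conn_lifetime_s
print(concurrent_connections)  # 6000
```

That lands on the same order of magnitude as the ~6000 connections seen in the management interface, which is consistent with connections not being reused per publish.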
Server log sample:
One can see the spikes in connections:
and how they seem unrelated to the number of messages in the queues, over that time period: