Closed: vlapenkov closed this issue 3 years ago
MassTransit already maintains a connection to the broker, so there is no setup/teardown time when publishing a message. I built a sample months ago to show the various performance characteristics of HTTP vs. RabbitMQ. I'm pretty sure I did a live Twitch stream on it, but never saved it to YouTube.
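For reference, a minimal sketch (MassTransit 7.x-style registration, not taken from this thread) of how the bus is typically wired up in ASP.NET Core so that one long-lived RabbitMQ connection is shared by every `Publish` call. The host, credentials, `OrderSubmitted` contract, and controller are hypothetical placeholders.

```csharp
using MassTransit;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;
using System.Threading.Tasks;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMassTransit(x =>
        {
            x.UsingRabbitMq((context, cfg) =>
            {
                // Placeholder host and credentials
                cfg.Host("localhost", "/", h =>
                {
                    h.Username("guest");
                    h.Password("guest");
                });
            });
        });

        // Starts the bus once at application startup; the underlying
        // RabbitMQ connection stays open for the lifetime of the app.
        services.AddMassTransitHostedService();

        services.AddControllers();
    }
}

public class OrderSubmitted // hypothetical message contract
{
    public int OrderId { get; set; }
}

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    readonly IPublishEndpoint _publishEndpoint;

    public OrdersController(IPublishEndpoint publishEndpoint)
    {
        _publishEndpoint = publishEndpoint;
    }

    [HttpPost]
    public async Task<IActionResult> Post(OrderSubmitted message)
    {
        // Publish goes over the bus's already-open connection/channel,
        // so there is no per-request connect/handshake cost.
        await _publishEndpoint.Publish(message);
        return Accepted();
    }
}
```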
Thank you for the fast response. But https://github.com/phatboyg/TooFast is used in a request/response manner. In my question I need one-way publishing, and that is what I measured the performance of.
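To make the distinction concrete, a rough sketch of the two call patterns in MassTransit; the message contracts and method names here are hypothetical, not from the TooFast sample.

```csharp
using MassTransit;
using System.Threading.Tasks;

public class PublishVsRequest
{
    // Request/response (the pattern TooFast exercises): the caller awaits a reply message.
    public static async Task<OrderStatus> AskAsync(IRequestClient<CheckOrder> client)
    {
        var response = await client.GetResponse<OrderStatus>(new CheckOrder { OrderId = 42 });
        return response.Message;
    }

    // One-way publish (the pattern measured in this issue): fire-and-forget, no reply awaited.
    public static Task PublishAsync(IPublishEndpoint endpoint)
    {
        return endpoint.Publish(new OrderPlaced { OrderId = 42 });
    }
}

// Hypothetical contracts used only for this sketch
public class CheckOrder { public int OrderId { get; set; } }
public class OrderStatus { public string Status { get; set; } }
public class OrderPlaced { public int OrderId { get; set; } }
```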
I don't have any suggestions for you.
Use the console benchmark tool and tweak the settings until you get the performance you need. If you can't get it high enough, you need to look at your broker to understand why.
Achieved about 800 messages/sec when using a load balancer and 5 horizontally scaled backends.
On my ASP.NET Core project I need to process about 700-1000 messages/sec. I expected to install a load balancer (NGINX+ or similar) and 3-5 backends publishing messages to RabbitMQ. Nginx handles its part successfully, but when sending to a durable queue I can't get past 50-70 messages/sec per client, whether with Publish or with this benchmark (which uses Send). The total rate is roughly linearly proportional to the number of clients, so I get about 10 times the throughput when I launch 10 times more clients, but I can't increase the number of backends beyond 5.

What could you suggest to overcome the 50-70 messages/sec threshold? Maybe keeping open connections in a pool? I looked at the RabbitMQ client implementation and the most time-consuming operations are creating the connection and creating the model. Any suggestions, please.
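To illustrate the pooling idea at the end of the question, a minimal sketch using the raw RabbitMQ .NET client in which one connection and one channel are created once and reused across many publishes, so the connection/model handshake cost is paid only once. The host name, queue name, and payload are placeholders.

```csharp
using System.Text;
using RabbitMQ.Client;

class ReusedConnectionPublisher
{
    static void Main()
    {
        // Creating the IConnection (TCP + AMQP handshake) and the IModel
        // (channel negotiation) are the expensive steps, so do them once.
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder host

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("orders", durable: true, exclusive: false, autoDelete: false);

            var props = channel.CreateBasicProperties();
            props.Persistent = true; // persistent messages on a durable queue

            for (var i = 0; i < 1000; i++)
            {
                var body = Encoding.UTF8.GetBytes($"{{\"orderId\":{i}}}");

                // Each publish reuses the already-open channel; no new
                // connection or model is created per message.
                channel.BasicPublish(exchange: "",
                                     routingKey: "orders",
                                     basicProperties: props,
                                     body: body);
            }
        }
    }
}
```

As noted in the reply above, MassTransit already does this internally: a single bus instance kept alive for the process lifetime reuses its broker connection for every publish.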