Open · 6r1d opened this issue 2 years ago
Thank you for your research, but I am sure that the testing methodology is not correct. The fact is that `pika` does not enable publisher confirms out of the box. However, `aio-pika`, and `aiormq`, which is used as `aio-pika`'s backend, use reliable message publishing by default. That means the message-sending stage does not end until a delivery confirmation is received from the broker, which might explain your results, I guess.
What I could advise is to conduct the test under equal conditions (see the sketch after this list):

- Try to set up publisher confirms in `pika` (but I know it's not very easy).
- Turn off publisher confirms when creating the channel in `aio-pika`.

Of course, publisher confirms are safe from the point of view of data integrity. However, as you have already seen, they are slower. In practice, users do not often bombard the broker with messages, so this behaviour is selected by default. And of course, you have an alternative: you can disable it.
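For concreteness, here is a minimal sketch of the two equal-condition setups described above. The connection URL, queue name, and message body are placeholder assumptions, not code from the benchmark:

```python
import pika
import aio_pika

# pika: confirms are OFF by default; turn them ON explicitly.
def publish_with_confirms():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.confirm_delivery()  # enable publisher confirms
    channel.queue_declare(queue="test")
    channel.basic_publish(exchange="", routing_key="test", body=b"hello")
    connection.close()

# aio-pika: confirms are ON by default; turn them OFF explicitly.
async def publish_without_confirms():
    connection = await aio_pika.connect("amqp://guest:guest@localhost/")
    channel = await connection.channel(publisher_confirms=False)
    queue = await channel.declare_queue("test")
    await channel.default_exchange.publish(
        aio_pika.Message(b"hello"), routing_key=queue.name
    )
    await connection.close()
```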
`aio-pika` (as well as `aiormq`) was not created as a competitor to `pika`. `aio-pika` solves the problem of concurrency in the first place, and second of all, the safety of concurrent code from the point of view of the library user. And, as you have probably already noticed, it has a radically different interface.
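To make that interface difference concrete, here is a minimal consumer sketch using aio-pika's async iterator API; the URL and queue name are placeholders:

```python
import asyncio
import aio_pika

async def main():
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        queue = await channel.declare_queue("test")

        # Iterate over incoming messages asynchronously.
        async with queue.iterator() as messages:
            async for message in messages:
                async with message.process():  # acks on success, rejects on error
                    print(message.body)

asyncio.run(main())
```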
First of all, thanks for the quick reply!
> aio-pika (as well as aiormq) were not created as a competitor to pika
I understand that; I compared an async solution and a (probably) sync-only one to get some practice.
> And as you probably already noticed, a radically different interface.
Yes, I find it easier to use.
> Thank you for your research, but I am sure that the testing methodology is not correct. Try to set up publisher confirms in pika (but I know it's not very easy). Turn off publisher confirms when creating the channel in aio-pika.
Thanks for explaining that part; at the very least I understand how to test both in a better way.
By the way, since I want to test `aio-pika` more, are these changes valid for disabling confirms?
```python
# Before
channel = await connection.channel()
# Now: create the channel with publisher confirms disabled
channel = await connection.channel(publisher_confirms=False)
```

```python
# Before
await channel.default_exchange.publish(
    Message(paragraph.encode('utf-8')), routing_key=queue.name
)
# Now: mandatory=False means the broker silently drops unroutable
# messages instead of returning them to the publisher
await channel.default_exchange.publish(
    Message(paragraph.encode('utf-8')),
    routing_key=queue.name,
    mandatory=False
)
```
I've got about 1,436 messages/s published and delivered.
After a short investigation, I can speed up write performance by up to 25%, but the bottleneck is the pamqp marshaller. So my fixes will be available soon in `aiormq`.
Great to hear, thanks!
@mosquito, could you share your investigations? I'll try to implement it :)
@Olegt0rr it's already released; you have to double-check this.
@6r1d, could you measure the speed again and update your benchmark?
@mosquito it seems that the `basic_publish` method of `aio-pika` waits for the confirmation. Do you have any performance-tuning tips for the case of publishing many thousands of messages? One simple option would be to do an `asyncio.gather`, but I suspect there will be some sweet spot in terms of batching; a sketch follows.
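For illustration, a minimal sketch of that batching idea; `publish_in_batches`, the `messages` list, and the `batch_size` value are assumptions to tune, not an established recipe:

```python
import asyncio
from aio_pika import Message

async def publish_in_batches(channel, queue_name, messages, batch_size=500):
    # Publish each batch concurrently: every publish still awaits its
    # confirm, but the confirms for a whole batch are awaited in parallel.
    for start in range(0, len(messages), batch_size):
        batch = messages[start:start + batch_size]
        await asyncio.gather(*(
            channel.default_exchange.publish(
                Message(body.encode("utf-8")),
                routing_key=queue_name,
            )
            for body in batch
        ))
```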
Is it possible to publish using `aio-pika` and handle the confirms totally asynchronously? I.e., I would want to perform all of my publishes up front and then handle the failures as they come in.
@mikeoconnor0308 just use transactions
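For anyone landing here, a minimal sketch of the transaction approach, assuming a local broker; note that aio-pika transactions require a channel created with `publisher_confirms=False` (URL and queue name are placeholders):

```python
import aio_pika
from aio_pika import Message

async def publish_in_transaction(messages):
    connection = await aio_pika.connect("amqp://guest:guest@localhost/")
    async with connection:
        # Transactions and publisher confirms are mutually exclusive.
        channel = await connection.channel(publisher_confirms=False)
        queue = await channel.declare_queue("test")

        async with channel.transaction():
            for body in messages:
                await channel.default_exchange.publish(
                    Message(body.encode("utf-8")),
                    routing_key=queue.name,
                )
        # Leaving the block commits; an exception inside rolls back.
```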
> @mikeoconnor0308 just use transactions
Thanks, I tried this and did see about a 20% speed-up. I also note that in simple tests where I try to publish 100k messages (with a transaction), `pika` is about 2x faster than `aio-pika`.
Hello. I am experimenting with the speed of different RabbitMQ Python libraries and expected `aio-pika` to be slightly faster than a synchronous library, but instead got different results in Docker.

I have used more or less default code with minimal changes to make it consistent between the two versions I am comparing: `pika` and `aio-pika`. I am not sure what to suspect: my delays (although those should not be the issue, by asyncio's definition) or some code I'm writing incorrectly. Could anyone recommend what to change to improve the performance of the `aio-pika` version? Thanks.

Note: I've set `AIOHTTP_NO_EXTENSIONS=1` and `YARL_NO_EXTENSIONS=1` for a build, otherwise it was breaking. It may be related to the performance drop.
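For reference, a minimal sketch of the kind of publish loop being compared here; this is an assumption about the benchmark's shape (URL, queue name, and message count are placeholders), not the actual benchmark code:

```python
import asyncio
import time
import aio_pika

async def main(n=10_000):
    connection = await aio_pika.connect("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        queue = await channel.declare_queue("test")

        started = time.monotonic()
        for i in range(n):
            # With default settings each publish awaits a broker confirm.
            await channel.default_exchange.publish(
                aio_pika.Message(f"message {i}".encode()),
                routing_key=queue.name,
            )
        elapsed = time.monotonic() - started
        print(f"{n / elapsed:.0f} messages/s")

asyncio.run(main())
```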