Polyconseil / aioamqp

AMQP implementation using asyncio

Channel Connection Pool #54

Closed · kracekumar closed this issue 8 years ago

kracekumar commented 8 years ago

As per the AMQP docs, it is suggested to open one connection and keep any number of channels open on it. The one-connection / n-channels pattern gives a big performance improvement, so it would be worth maintaining a pool of AMQP channels.

I am interested in sending a pull request. Please let me know if any more info is required.

dzen commented 8 years ago

Hello @kracekumar,

Since each channel is dedicated to a single purpose (it is linked to a queue, an exchange, …) and creating one is not as expensive as creating a heavyweight resource (like a thread or a new TCP connection), what is the benefit of a pool?

A new channel is created on each call to channel(): https://github.com/Polyconseil/aioamqp/blob/master/examples/publisher.py#L20
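
For reference, a rough sketch of the one-connection / many-channels pattern based on that example (the queue name and payloads are just illustrative):

```python
import asyncio
import aioamqp

async def main():
    # One TCP/AMQP connection...
    transport, protocol = await aioamqp.connect()
    # ...and as many channels as you like, each from protocol.channel().
    channels = [await protocol.channel() for _ in range(3)]
    for i, channel in enumerate(channels):
        await channel.queue_declare(queue_name='hello')
        await channel.basic_publish(('message %d' % i).encode(),
                                    exchange_name='',
                                    routing_key='hello')
    await protocol.close()
    transport.close()

asyncio.get_event_loop().run_until_complete(main())
```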

kracekumar commented 8 years ago

Hi @dzen

Other implementations such as kombu and Bunny have the same concept.

I haven't benchmarked it myself with and without a pool; it was a general suggestion, since an AMQP request is sent each time a channel is created.

smurfix commented 8 years ago

Frankly, I don't see the problem.

If some messages take too much time to process, wrap the processing in asyncio.ensure_future(), or delegate them to a subprocess. Or, if large messages are the problem you're trying to solve, create a new aioamqp instance for exchanging them.
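
For the first option, a rough sketch (assuming an aioamqp-style consumer callback; process_slowly stands in for your own coroutine):

```python
import asyncio

async def process_slowly(body):
    await asyncio.sleep(1)  # stand-in for the real, slow work

async def on_message(channel, body, envelope, properties):
    # Schedule the slow work as its own task so the callback returns quickly,
    # then acknowledge the delivery.
    asyncio.ensure_future(process_slowly(body))
    await channel.basic_client_ack(envelope.delivery_tag)
```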

dzen commented 8 years ago

@kracekumar what is your use case? Are you using asyncio in a web framework and creating a new AMQP channel on each incoming request? If so, I'm not sure that's the right way to do it.

kracekumar commented 8 years ago

@dzen I am yet to start the new websocket project; I am still benchmarking and reading up on asyncio (aiohttp) versus gevent (Flask/uWSGI). Every incoming message will be enqueued to RabbitMQ, so I am weighing opening a channel for every message against fetching a channel from a pool, the way SQLAlchemy does with its connection pool. For 100 concurrent connections, that means opening a new channel per message vs. fetching one from a pool (a queue).
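
Roughly what I have in mind (a hypothetical ChannelPool helper built on asyncio.Queue, not something that exists in aioamqp today):

```python
import asyncio

class ChannelPool:
    """Hypothetical helper: reuse channels opened on a single aioamqp connection."""

    def __init__(self, protocol, size=10):
        self._protocol = protocol   # aioamqp protocol returned by connect()
        self._size = size
        self._channels = asyncio.Queue()

    async def start(self):
        # Open `size` channels up front on the shared connection.
        for _ in range(self._size):
            self._channels.put_nowait(await self._protocol.channel())

    async def acquire(self):
        return await self._channels.get()

    def release(self, channel):
        self._channels.put_nowait(channel)

async def publish(pool, payload):
    # Borrow a channel per message instead of opening a new one each time.
    channel = await pool.acquire()
    try:
        await channel.basic_publish(payload, exchange_name='', routing_key='messages')
    finally:
        pool.release(channel)
```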

If this doesn't make sense, please close the issue.

smurfix commented 8 years ago

@kracekumar This may be a stupid question, but why would you need a separate channel per message in the first place? Simply put the messages onto a queue (along with a future, to signal success to the HTTP task) and start a worker task that feeds the data to RabbitMQ. If that's not fast enough (it should be), start more workers to process the queue, each with its own RabbitMQ connection; no pooling needed.
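
A rough sketch of what I mean (the queue name and helper names are illustrative):

```python
import asyncio
import aioamqp

async def rabbit_worker(queue):
    # One connection and one channel, owned by this worker task.
    transport, protocol = await aioamqp.connect()
    channel = await protocol.channel()
    await channel.queue_declare(queue_name='messages')
    while True:
        payload, done = await queue.get()
        try:
            await channel.basic_publish(payload, exchange_name='',
                                        routing_key='messages')
            done.set_result(True)       # signal success to the http task
        except Exception as exc:
            done.set_exception(exc)

async def enqueue(queue, payload):
    # Called from each http/websocket handler.
    done = asyncio.get_event_loop().create_future()
    await queue.put((payload, done))
    await done
```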

kracekumar commented 8 years ago

@smurfix That is an alternative architecture (by queue you mean a Python asyncio queue, I take it). Thanks for that.

dzen commented 8 years ago

@kracekumar I'm closing this issue: IMHO, the way to go is to create one channel per queue/exchange and try to keep them open.