Matthias247 / jawampa

Web Application Messaging Protocol (WAMP v2) support for Java
Apache License 2.0

Control the thread pool's queue size #86

Open KMax opened 8 years ago

KMax commented 8 years ago

If we look at NettyWampClientConnectorProvider.createScheduler and go deeper into the implementation, we will find that it uses SingleThreadEventExecutor. SingleThreadEventExecutor uses new LinkedBlockingQueue<Runnable>() as the queue for submitted tasks.

This means that the thread pool used to publish messages has an unbounded queue, so if the client publishes too many messages it may at some point fail with an OutOfMemoryError.
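A quick way to see the issue: the no-argument LinkedBlockingQueue constructor (the one SingleThreadEventExecutor falls back to) reports a capacity of Integer.MAX_VALUE, i.e. it is effectively unbounded:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Shows that the default LinkedBlockingQueue is effectively unbounded:
// its remaining capacity is Integer.MAX_VALUE, so pending tasks can pile
// up until the JVM runs out of heap.
public class UnboundedQueueDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
        System.out.println(queue.remainingCapacity()); // 2147483647
    }
}
```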

I tried to override the NettyWampClientConnectorProvider.createScheduler method to use a thread pool with a fixed-size queue, but for an unknown reason the client wasn't even able to connect to the broker.

Are there any recommendations on how to limit the size of the queue?

the20login commented 8 years ago

I don't think this is a jawampa issue; it looks more like a Netty API problem. I mean, jawampa just uses one of the default EventLoopGroup implementations, so if you need a different implementation, you should ask in the Netty repo, not jawampa.

Also, did you really hit an OOM, or is this a theoretical bug? As far as I know, EventLoopGroup worker threads just listen on a selector, so their number correlates mostly with the connection count, not the message count. I mean, if you have only a single TCP connection (as in a jawampa client), you don't really need more than one thread on the selector. If you have a lot of heavy messages, you could use an intermediate processing thread pool with whatever properties you require.
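One way to build such an intermediate pool is a sketch like the following (not jawampa API, just standard java.util.concurrent): a ThreadPoolExecutor over a bounded ArrayBlockingQueue with CallerRunsPolicy, so that when the queue fills up the submitting thread runs the task itself and producers are throttled instead of memory growing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A bounded intermediate processing pool: when the queue is full,
// CallerRunsPolicy makes the caller execute the task itself, which
// throttles producers instead of queuing tasks without limit.
public class BoundedProcessingPool {
    public static ThreadPoolExecutor create(int threads, int queueSize) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(queueSize),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = create(2, 100);
        final AtomicInteger processed = new AtomicInteger();
        // Submit far more tasks than the queue can hold; none are dropped,
        // the submitter just slows down when the queue is full.
        for (int i = 0; i < 1000; i++) {
            pool.execute(processed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(processed.get()); // 1000
    }
}
```

The same queue bound then caps the memory held by pending work, at the cost of occasionally doing work on the producer thread.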

Please correct me if I am wrong.

Matthias247 commented 8 years ago

Unfortunately, I think WAMP itself is not the best protocol for such considerations: there is no application-level flow control, and due to some guarantees that WAMP implementations have to make (don't lose or reorder messages), the best thing we can do is queue messages in several places.

From my point of view, one thing that could be monitored is the size of the outgoing message queue (implemented in QueuingConnectionController). But the only sensible reaction would be to close the connection once the queue size gets too big. For incoming messages there are currently no means to provide backpressure. It could work by changing from a push to a pull model for receiving messages from the networking abstraction, but that would be a bigger change.
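The "close the connection when the outgoing queue grows too big" policy could look roughly like this generic sketch. The Connection interface, the MAX_PENDING threshold, and the enqueue method are all hypothetical, not QueuingConnectionController's actual API:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: track the outgoing queue size and, once it
// exceeds a threshold, close the connection instead of queuing further.
public class MonitoredSendQueue {
    interface Connection { void close(); }          // stand-in, not jawampa API

    private static final int MAX_PENDING = 1000;    // illustrative threshold

    private final ConcurrentLinkedQueue<Object> pending = new ConcurrentLinkedQueue<Object>();
    private final Connection connection;
    private volatile boolean closed = false;

    MonitoredSendQueue(Connection connection) { this.connection = connection; }

    /** Returns false if the message was rejected because the queue overflowed. */
    boolean enqueue(Object message) {
        if (closed) return false;
        pending.add(message);
        if (pending.size() > MAX_PENDING) {
            closed = true;              // only sensible reaction: give up the connection
            connection.close();
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        final boolean[] closedFlag = {false};
        MonitoredSendQueue q = new MonitoredSendQueue(() -> closedFlag[0] = true);
        boolean accepted = true;
        for (int i = 0; i <= MAX_PENDING; i++) {    // one message over the limit
            accepted = q.enqueue("msg");
        }
        System.out.println(accepted + " " + closedFlag[0]); // false true
    }
}
```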

KMax commented 8 years ago

@the20login Yes, you're right, this is more a question for Netty than for jawampa. It wasn't a theoretical question: I was actually analysing the memory consumption of an app and found this.

@Matthias247, @the20login I already use a separate thread pool for processing messages before publishing them to the broker, but since the queue of jawampa's thread pool is not bounded, I probably need to add an additional fixed-size queue in front of it to control memory consumption.
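Such a fixed-size queue in front of the publisher could be a plain ArrayBlockingQueue used as a blocking hand-off: producers call put(), which blocks while the buffer is full, so the number of buffered messages (and hence memory) stays bounded. The consumer below stands in for the call into the WAMP client and is not real jawampa API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A fixed-size hand-off queue in front of an unbounded publisher:
// put() blocks when the buffer is full, bounding buffered memory.
public class BoundedHandoff {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> buffer = new ArrayBlockingQueue<String>(64);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = buffer.take();   // hand the message to the client here
                    if (msg.equals("STOP")) return;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer side: blocks instead of letting memory grow without bound.
        for (int i = 0; i < 1000; i++) {
            buffer.put("message-" + i);
        }
        buffer.put("STOP");
        consumer.join();
        System.out.println("done");
    }
}
```

With offer(msg, timeout, unit) instead of put() you could also drop or reject messages after a deadline rather than blocking producers indefinitely.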

@Matthias247 I also tried to override the NettyWampClientConnectorProvider.createScheduler method to use a ThreadPoolExecutor with a fixed-size queue, but it didn't work for some reason: the client couldn't connect to the broker. I haven't found the cause yet; I may need more time to understand it.