Azure / DotNetty

DotNetty project – a port of netty, an event-driven asynchronous network application framework

Huge Memory Usage - DotNetty.Buffers.HeapArena #174

Closed: aconite33 closed this issue 7 years ago

aconite33 commented 7 years ago

When transferring a large amount of data (e.g., sending a file), I'm seeing that DotNetty consumes a huge amount of memory. After the data is sent, the buffers don't seem to release the memory, and I'm left with an application with a huge memory footprint.

Taking a snapshot of my program after I've transferred a file results in the screenshot below. Is this an issue with DotNetty, or am I doing something where I'm not releasing resources appropriately? (screenshot: screen shot 2016-11-16 at 8 54 53 am)

nayato commented 7 years ago

There are a number of parameters that affect the pooled buffer allocator's behavior. Basically, there are a few knobs you can tweak: the number of heap arenas and the size of each arena (controlled by DEFAULT_MAX_ORDER and DEFAULT_PAGE_SIZE). Try adjusting these to lower maximum memory consumption. An easier option would be to switch to UnpooledByteBufferAllocator. It is a natural choice for client applications, but I wouldn't recommend it for high-perf scenarios. Another option, if you can predict the situation, is to request buffers that are too big to be cached by definition.
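For reference, a minimal sketch of how the allocator can be swapped on a client bootstrap. The UnpooledByteBufferAllocator line is the one quoted later in this thread; the PooledByteBufferAllocator constructor arguments (heap arena count, page size, max order) are an assumption based on netty's Java API, so check your DotNetty version for the exact overload before uncommenting.

```csharp
using DotNetty.Buffers;
using DotNetty.Transport.Bootstrapping;
using DotNetty.Transport.Channels;

var bootstrap = new Bootstrap();

// Option A: skip pooling entirely; simplest, and usually fine for client apps.
bootstrap.Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default);

// Option B (assumed constructor, mirroring netty): a smaller pooled allocator.
// Chunk size is pageSize << maxOrder, so 8192 << 7 = 1 MB per chunk instead of
// the default 8192 << 11 = 16 MB.
// var smallerPool = new PooledByteBufferAllocator(2 /* heap arenas */,
//                                                 8192 /* page size */,
//                                                 7 /* max order */);
// bootstrap.Option(ChannelOption.Allocator, smallerPool);
```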

aconite33 commented 7 years ago

Awesome, thanks Nayato. I'm trying to find some examples of adjusting those knobs. Do you have any references that could point to a path forward? Also, I changed it over to UnpooledByteBufferAllocator: bootstrap.Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default); but it still explodes my memory. I also notice that WriteAndFlushAsync doesn't send immediately. It seems to take a while before sending (in my case, reading the entire file before sending it, instead of sending it in chunks).
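One way to avoid the whole file piling up in the outbound buffer before anything hits the wire is to read and write it in chunks and await each flush. A rough sketch under those assumptions; the helper name, chunk size, and the use of channel.Allocator are illustrative, not something prescribed by this thread:

```csharp
using System.IO;
using System.Threading.Tasks;
using DotNetty.Buffers;
using DotNetty.Transport.Channels;

static async Task SendFileInChunksAsync(IChannel channel, string path, int chunkSize = 64 * 1024)
{
    using (var stream = File.OpenRead(path))
    {
        var chunk = new byte[chunkSize];
        int read;
        while ((read = await stream.ReadAsync(chunk, 0, chunk.Length)) > 0)
        {
            // Copy only the bytes actually read into a buffer from the channel's allocator.
            IByteBuffer buffer = channel.Allocator.Buffer(read);
            buffer.WriteBytes(chunk, 0, read);

            // Awaiting the flush applies backpressure: the next chunk is not
            // queued until the previous one has left the outbound buffer.
            await channel.WriteAndFlushAsync(buffer);
        }
    }
}
```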

Edit: Comparing the two memory snapshots, the unpooled allocator is way lower, but still pretty high. After sending multiple files it does seem to recycle this memory, but after sending a file I still see a large amount of data held in HeapArena (16 million bytes).

nayato commented 7 years ago

Closing. Feel free to reopen if need be.

Joooooooooogi commented 7 years ago

I'm currently struggling with the same issues as described by aconite33.

I set up a test TCP server with basic handlers and a test client sending a few thousand messages of different sizes (around 100-500 KB each). After a short time I run into an out-of-memory exception, thrown at "DotNetty.Buffers.HeapArena.NewChunk(Int32 pageSize, Int32 maxOrder, Int32 pageShifts, Int32 chunkSize)".

I also changed it over to UnpooledByteBufferAllocator, but the behaviour is always the same. What was the solution in this case?

nayato commented 7 years ago

@Joooooooooogi one option is that you're genuinely exhausting memory. Are you releasing buffers? Are those buffers sitting somewhere in a queue (e.g. the channel's outbound buffer)? Your best bet is to take a process dump and trace where the buffers are referenced, to understand where memory is ultimately not being freed. I can easily get an OOM exception if I try to send 600K messages of 8 KB each without throttling the sending side: all the messages get queued up for a send operation that lags behind, and the buffers end up sitting in the channel's outbound buffer.
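As a sketch of the "are you releasing buffers" point: a handler at the end of the pipeline that consumes an inbound IByteBuffer without releasing it (or passing it on) will keep pooled memory alive. Something along these lines, with the handler name purely illustrative:

```csharp
using DotNetty.Buffers;
using DotNetty.Common.Utilities;
using DotNetty.Transport.Channels;

public class FileChunkHandler : ChannelHandlerAdapter
{
    public override void ChannelRead(IChannelHandlerContext context, object message)
    {
        var buffer = message as IByteBuffer;
        try
        {
            if (buffer != null)
            {
                // ... consume the buffer contents here ...
            }
        }
        finally
        {
            // Decrement the reference count so the pooled memory can be reused.
            ReferenceCountUtil.Release(message);
        }
    }
}
```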

Joooooooooogi commented 7 years ago

@nayato Thanks for your fast response. I think the high frequency of messages was also one of the issues I ran into. Using the unpooled buffer and slowing down the sending part, the memory consumption seems to be stable. I think the out-of-memory exception is also caused by the multiple threads I use, which all access the same channel via WriteAndFlushAsync. It seems like the server side sometimes gets kind of "out of sync": in that case my ReplayingDecoder gets stuck in one of its decoding states and memory fills up. I already put a lock on the client's write-to-channel method, but I think things sometimes still get messed up right there.

Will I need to implement my own kind of protocol that provides an ACK signal to the client before it sends the next message, or is this handled in some way inside netty?
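The thread doesn't answer this, but one common way to keep a decoder from getting out of sync, short of a hand-rolled ACK protocol, is length-prefixed framing on both ends so the decoder only ever sees complete, well-delimited messages. A sketch assuming the standard LengthFieldPrepender / LengthFieldBasedFrameDecoder constructors from DotNetty.Codecs; the 4-byte prefix and 1 MB maximum frame size are illustrative values:

```csharp
using DotNetty.Codecs;
using DotNetty.Transport.Channels;

static void ConfigurePipeline(IChannel channel)
{
    // Outbound: prepend a 4-byte length field to every message.
    channel.Pipeline.AddLast(new LengthFieldPrepender(4));

    // Inbound: buffer until a full frame has arrived, strip the 4-byte prefix,
    // then hand the complete message to the business handlers.
    channel.Pipeline.AddLast(new LengthFieldBasedFrameDecoder(
        1024 * 1024, // max frame length
        0,           // length field offset
        4,           // length field length
        0,           // length adjustment
        4));         // initial bytes to strip

    // Business handlers go after the framing codecs, e.g.:
    // channel.Pipeline.AddLast(new MyMessageHandler());
}
```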