Closed: qcz closed this issue 8 years ago.
Two comments here: in v0.3.0 the default allocator changed from UnpooledByteBufferAllocator to PooledByteBufferAllocator. See https://github.com/Azure/DotNetty/releases/tag/v0.3.0. Also, to set the unpooled allocator back, do this: https://github.com/Azure/azure-iot-sdks/blob/master/csharp/device/Microsoft.Azure.Devices.Client/Transport/Mqtt/MqttTransportHandler.cs#L429

You were right, I was not setting the byte buffer allocator while constructing the ServerBootstrap. However, it seems that the sample you linked is not working (or I am doing something wrong). My bootstrap is the following:
var bootstrap = new ServerBootstrap();
bootstrap
    .Group(BossGroup, WorkerGroup)
    .Channel<TcpServerSocketChannel>()
    .Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default)
    .ChildHandler(new ActionChannelInitializer<ISocketChannel>(channel =>
    {
        IChannelPipeline pipeline = channel.Pipeline;
        pipeline.AddLast(new MyDecoder());
    }));

BootstrapChannel = await bootstrap.BindAsync(Configuration.Port);
The Allocator property of the IChannelHandlerContexts is still PooledByteBufferAllocator, and I see the same memory patterns as before.
I've added a release for the byte buffer received in ChannelRead, still no changes:

IByteBuffer buffer = message as IByteBuffer;
if (buffer != null)
{
    // Read and stuff
    buffer.Release();
}
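(As a side note, a safer variant of the release above is a try/finally, so the buffer is released even if the read logic throws. This is only a sketch assuming a plain ChannelHandlerAdapter subclass, not code from this thread:)

```csharp
// Sketch: release the inbound buffer even when decoding throws.
// Assumes a handler deriving from ChannelHandlerAdapter.
public override void ChannelRead(IChannelHandlerContext context, object message)
{
    var buffer = message as IByteBuffer;
    if (buffer == null)
    {
        // Not a byte buffer; let the next handler deal with it.
        context.FireChannelRead(message);
        return;
    }

    try
    {
        // Read and stuff
    }
    finally
    {
        buffer.Release();
    }
}
```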
I've tried to use the PooledByteBufferAllocator for sending messages to the clients, but again nothing changed (however, I have not found any samples that use PooledByteBufferAllocator, just Unpooled.WrappedBuffer, so it is just a guess):
var buffer = context.Allocator.Buffer(payload.Length, payload.Length);
buffer.WriteBytes(payload);
context.WriteAndFlushAsync(buffer);
// this was before:
//context.WriteAndFlushAsync(Unpooled.WrappedBuffer(payload, 0, payload.Length));
Can you direct me to any source code or material where I can find out more about the usage of this?
Oh, if this is on a server you'd need to use .ChildOption(..) instead of .Option(..). Then again, if you want a high-performance server setup you'd probably want to go with the pooled allocator and review your code to make sure buffers are released once they're no longer used.
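(Roughly, the fix would look like this. A sketch based on the bootstrap posted earlier in the thread; ChildOption applies the option to the accepted child channels, while Option only affects the listening channel itself:)

```csharp
var bootstrap = new ServerBootstrap();
bootstrap
    .Group(BossGroup, WorkerGroup)
    .Channel<TcpServerSocketChannel>()
    // Option(..) configures the parent (listening) channel only:
    .Option(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default)
    // ChildOption(..) configures every accepted client connection,
    // which is where the reads and writes actually happen:
    .ChildOption(ChannelOption.Allocator, UnpooledByteBufferAllocator.Default)
    .ChildHandler(new ActionChannelInitializer<ISocketChannel>(channel =>
    {
        channel.Pipeline.AddLast(new MyDecoder());
    }));
```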
It's hard to advise on buffer lifetime management without seeing all the code that touches buffers. Please check the DotNetty logs to see if there are entries about resource leaks. See the Echo example for how to configure log dumping with semantic logging: https://github.com/Azure/DotNetty/blob/dev/examples/Echo.Client/Program.cs#L23.
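(For reference, a hedged sketch of turning up leak detection so leaked pooled buffers are reported with allocation traces. This assumes DotNetty ports Netty's ResourceLeakDetector from DotNetty.Common, which is an assumption here, not something stated in the thread:)

```csharp
// Assumption: DotNetty mirrors Netty's leak detector API.
// Paranoid tracks every allocation and is intended for debugging only;
// drop back to Simple (the usual default) for production.
ResourceLeakDetector.Level = ResourceLeakDetector.DetectionLevel.Paranoid;
```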
Thank you! ChildOption worked like a charm. Memory usage is 30 MB again :+1:
And thanks for the tip on the diagnostic log; it will definitely come in handy when I try to find the source of the leak while using PooledByteBufferAllocator.
After updating to DotNetty 0.3.1, memory usage has grown almost 10x for the same workload. Before updating to 0.3.1 we used version 0.2.6.
The workload is the following:
With 0.2.6 the memory usage was a steady 30 MB.
With 0.3.1 the memory usage jumps to 100-110 MB right after the first batch of clients connects, and finally goes up to 250-300 MB during the talk phase (the CPU-intensive part on the graph). Furthermore, the graph is full of GCs.
A memory snapshot shows that all this space is allocated for HeapArena. It is not GC'd even after the clients are disconnected and a long time has elapsed.
(I tried v0.3.0 too, but it was worse: memory usage went up to 700-800 MB and it started throwing OutOfMemoryExceptions.)
Additional info: WriteAndFlushAsync(Unpooled.WrappedBuffer(packet, 0, packet.Length)); is used to write messages to the clients.