kptfh opened this issue 4 years ago
The buffer copy to byte[] was implemented for these reasons:
1) Parsing byte[] responses can share the same code path for both Netty and NIO. ByteBuf and ByteBuffer are two completely separate classes.
2) Parsing byte[] is faster than parsing ByteBuffer. I have not benchmarked ByteBuf with byte[] parsing, but I assume there is a difference here too.
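For illustration, a minimal sketch (a hypothetical method, not the client's actual parser) of why byte[] parsing can be shared:

```java
public class ParseSketch {
    // One parser serves both transports, because each can hand over a plain
    // byte[]. ByteBuf- and ByteBuffer-based parsers would have to be written
    // separately, since the two classes share no common interface.
    static int readBigEndianInt(byte[] buf, int offset) {
        return ((buf[offset]     & 0xFF) << 24)
             | ((buf[offset + 1] & 0xFF) << 16)
             | ((buf[offset + 2] & 0xFF) << 8)
             |  (buf[offset + 3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] response = {0, 0, 1, 0}; // big-endian 256
        System.out.println(readBigEndianInt(response, 0)); // prints 256
    }
}
```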
Hi @BrianNichols! My issue is not about parsing/reading the response, but about writing/serializing the async command. The command is first serialized into a byte array. Then:
1) For Netty, you create a ByteBuf and copy the byte array into it, but there is an option to wrap the existing byte[] with io.netty.buffer.Unpooled.wrappedBuffer() to avoid the copy.
2) For NIO, you copy the byte array into a ByteBuffer, but a byte array can likewise be wrapped in a ByteBuffer.
Both wrapping options are sketched below.
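A minimal sketch of the two wrapping calls (variable names are illustrative); both share the backing array instead of copying it:

```java
import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class WrapSketch {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};

        // Netty: wrappedBuffer() shares the backing array; no copy is made.
        ByteBuf nettyBuf = Unpooled.wrappedBuffer(data);

        // NIO: ByteBuffer.wrap() also shares the array (a heap buffer, not direct).
        ByteBuffer nioBuf = ByteBuffer.wrap(data);

        // A write to the array is visible through both wrappers.
        data[0] = 42;
        System.out.println(nettyBuf.getByte(0)); // 42
        System.out.println(nioBuf.get(0));       // 42
    }
}
```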
The option to wrap a byte[] in a ByteBuffer only exists for a non-direct heap ByteBuffer in NIO. The client uses direct ByteBuffers because they occupy contiguous memory that can't be moved by the JVM. This is an important performance consideration for TCP stacks that are embedded in the OS kernel.
If a non-direct heap ByteBuffer were used, the JVM would likely copy the data from its heap to a contiguous memory location before applying the socket write. Ultimately, it's a question of whether this copy should be explicit or not.
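A small sketch of the heap/direct distinction described above (the allocation sizes are arbitrary):

```java
import java.nio.ByteBuffer;

public class DirectVsHeap {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] on the Java heap; wrap() gives one for free.
        ByteBuffer heap = ByteBuffer.wrap(new byte[1024]);
        System.out.println(heap.isDirect()); // false
        System.out.println(heap.hasArray()); // true

        // Direct buffer: contiguous off-heap memory the GC will not relocate,
        // so the kernel can read from it during a socket write without a staging copy.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);
        System.out.println(direct.isDirect()); // true
        System.out.println(direct.hasArray()); // false
    }
}
```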
Netty also has the concept of heap (newHeapBuffer()) vs direct (newDirectBuffer()) byte buffers. I suspect the same principle applies in this case.
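The same split is visible through Netty's public allocator API, as in this sketch (I believe the newHeapBuffer()/newDirectBuffer() methods named above are the internal hooks behind these calls):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class NettyHeapVsDirect {
    public static void main(String[] args) {
        ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;

        // Heap buffer: backed by a byte[]; cheap to fill, but the transport may
        // still copy it into native memory before the actual socket write.
        ByteBuf heap = alloc.heapBuffer(1024);
        System.out.println(heap.isDirect()); // false

        // Direct buffer: off-heap; the transport can hand it to the kernel as-is.
        ByteBuf direct = alloc.directBuffer(1024);
        System.out.println(direct.isDirect()); // true

        heap.release();
        direct.release();
    }
}
```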
If you have client benchmarks that indicate wrapped byte buffers are faster for socket operations, I would be willing to modify the current implementation.
Here are the two points where the byte[] gets copied (a sketch of both changes follows the list):
1) NettyCommand.writeByteBuffer: avoid byteBuffer.writeBytes(command.dataBuffer, 0, command.dataOffset); try wrapping command.dataBuffer with io.netty.buffer.Unpooled.wrappedBuffer() instead.
2) NioCommand.writeCommand: avoid byteBuffer.put(command.dataBuffer, 0, command.dataOffset); try using command.dataBuffer directly instead.
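A rough sketch of what both changes might look like (the helper methods are hypothetical; dataBuffer/dataOffset mirror the field names quoted above, with dataOffset taken to be the number of serialized bytes):

```java
import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ProposedWrites {
    // 1) Netty side: wrap the serialized command instead of copying it.
    static ByteBuf wrapForNetty(byte[] dataBuffer, int dataOffset) {
        // Instead of: byteBuf.writeBytes(dataBuffer, 0, dataOffset)
        return Unpooled.wrappedBuffer(dataBuffer, 0, dataOffset);
    }

    // 2) NIO side: wrap the serialized command instead of copying it.
    static ByteBuffer wrapForNio(byte[] dataBuffer, int dataOffset) {
        // Instead of: byteBuffer.put(dataBuffer, 0, dataOffset)
        return ByteBuffer.wrap(dataBuffer, 0, dataOffset);
    }
}
```

Two caveats with either wrapper: the byte[] must not be reused until the write completes, and (per the earlier reply) ByteBuffer.wrap() yields a heap buffer, so NIO may still perform an internal copy to contiguous native memory before the socket write.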