Alluxio Version: Alluxio 2.4.0+
Describe the bug
`alluxio.user.block.worker.client.pool.max` is capped at 1024. When the file being written exceeds 1024 blocks, all read and write operations from the client are blocked.

I took a closer look at the code and found that each `DataWriter` holds a connection tied to a `BlockOutStream`. However, while a large file is being written, each finished `BlockOutStream` is not released; it is cached in `mPreviousBlockOutStreams` and only released when the file is closed.

So I'm curious why it is designed like this.
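For clarity, the caching pattern described above looks roughly like the following. This is a simplified sketch, not the actual Alluxio source; the names `BlockOutStream` and `mPreviousBlockOutStreams` are taken from the report, and everything else (the class `FileOutStreamSketch`, its methods) is an assumption for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the pattern described above; NOT the real Alluxio
// implementation. Each BlockOutStream stands in for one pooled connection.
class BlockOutStream {
  private boolean mClosed = false;
  boolean isClosed() { return mClosed; }
  void close() { mClosed = true; }
}

class FileOutStreamSketch {
  // Streams for completed blocks are cached here instead of being closed.
  private final List<BlockOutStream> mPreviousBlockOutStreams = new ArrayList<>();
  private BlockOutStream mCurrentBlockOutStream;

  // Called when the current block fills up: the finished stream is cached,
  // so its worker-client connection stays checked out of the pool.
  void getNextBlock() {
    if (mCurrentBlockOutStream != null) {
      mPreviousBlockOutStreams.add(mCurrentBlockOutStream);
    }
    mCurrentBlockOutStream = new BlockOutStream();
  }

  // Only closing the whole file releases the cached streams, which is why a
  // file with more blocks than the pool size exhausts the connection pool.
  void close() {
    for (BlockOutStream s : mPreviousBlockOutStreams) { s.close(); }
    if (mCurrentBlockOutStream != null) { mCurrentBlockOutStream.close(); }
  }

  // Number of streams (connections) currently held open.
  int openStreams() {
    int n = 0;
    for (BlockOutStream s : mPreviousBlockOutStreams) { if (!s.isClosed()) n++; }
    if (mCurrentBlockOutStream != null && !mCurrentBlockOutStream.isClosed()) n++;
    return n;
  }
}
```

With this pattern, the number of held connections grows by one per completed block and never shrinks until `close()`, so a file larger than `blockSize * pool.max` cannot finish.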
To Reproduce
Write a file larger than `blockSize * 1024`.

Expected behavior
The client deadlocks (in our case, the deadlock persisted for 100 days).
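The deadlock mechanism can also be shown generically: if every completed block holds one connection from a bounded pool, the 1025th acquisition can never succeed until something is closed. A minimal sketch (generic Java, not Alluxio code; the constant mirrors the property's cap):

```java
import java.util.concurrent.Semaphore;

// Generic illustration (not Alluxio code): a bounded connection pool whose
// permits are held, one per written block, until the file is closed.
class PoolExhaustionDemo {
  // Mirrors the cap of alluxio.user.block.worker.client.pool.max.
  static final int POOL_MAX = 1024;

  // Returns whether a new connection can still be acquired after
  // `blocksWritten` blocks each hold one connection open.
  static boolean connectionAvailableAfter(int blocksWritten) {
    Semaphore pool = new Semaphore(POOL_MAX);
    for (int i = 0; i < blocksWritten; i++) {
      // Each completed block checks out a connection and never returns it
      // (only call with blocksWritten <= POOL_MAX, or this loop blocks).
      pool.acquireUninterruptibly();
    }
    return pool.tryAcquire();
  }

  public static void main(String[] args) {
    // After 1024 blocks, no permits remain: every further client
    // operation that needs a pooled connection blocks indefinitely.
    System.out.println(connectionAvailableAfter(1024));
  }
}
```

At block 1023 a connection is still available; at block 1024 the pool is empty, matching the hang described above.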
Urgency
Very urgent!