absolute8511 opened 1 year ago
🤔 I think maybe you can split these data to another broker instance.
Thanks for your idea, I'm very interested in it. (I'm not a contributor yet 🐒, but I will try it locally.)
Summary
Currently, the broker opens every file, including the commitlog and consume queue files, via mmap. While a mmap'd file is read and written, page table entries (PTEs) are populated in the kernel (each mapped page costs 8 bytes of kernel memory). If the total data size on disk grows to 4 TB, the memory used by PTEs reaches 8 GB, which is too much for a normal pod and may cause OOM issues. I think we should control the memory used by the broker. Below is an example for a 1 TB data size
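The PTE figures above follow from simple arithmetic. A quick sketch, assuming 4 KB pages and 8-byte page table entries (the x86-64 Linux defaults; the class and method names are illustrative only):

```java
public class PteOverhead {
    // Kernel memory consumed by page table entries when `dataBytes`
    // of file data is fully mapped and touched.
    static long pteBytes(long dataBytes) {
        long pageSize = 4096; // 4 KB pages
        long pteSize = 8;     // 8 bytes per page table entry
        return dataBytes / pageSize * pteSize;
    }

    public static void main(String[] args) {
        long tb = 1L << 40;
        System.out.println(pteBytes(1 * tb)); // 1 TB mapped -> 2 GB of PTEs
        System.out.println(pteBytes(4 * tb)); // 4 TB mapped -> 8 GB of PTEs
    }
}
```

This matches the numbers in the summary: 4 TB of mapped data is one billion 4 KB pages, i.e. 8 GB of page table entries.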
Motivation
Most brokers deployed in the cloud run in pods that have limited memory. For large data sizes, most of the data is old enough that read latency does not matter, so it is not necessary to keep every file open as mmap, which puts memory usage out of control.
Describe the Solution You'd Like
Maybe keep only the several newest data files open as mmap, close the old files, and reopen an old file for reading only when a request needs it.
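A minimal sketch of that idea, assuming a hypothetical `MappedFileCache` class (the name and the LRU policy are illustrative, not taken from the broker's code): keep at most `maxOpen` files mapped, drop the least recently used mapping when the limit is exceeded, and lazily re-map an old file when a read request arrives for it.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: bounds the number of simultaneously mmap'd files
// so PTE usage stays proportional to the hot working set, not the disk.
public class MappedFileCache {
    private final int maxOpen;
    private final LinkedHashMap<Path, MappedByteBuffer> open;

    public MappedFileCache(int maxOpen) {
        this.maxOpen = maxOpen;
        // accessOrder=true makes iteration order least-recently-used first.
        this.open = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Path, MappedByteBuffer> e) {
                // Dropping the reference lets the mapping be reclaimed later;
                // a real broker would unmap the region explicitly.
                return size() > MappedFileCache.this.maxOpen;
            }
        };
    }

    public synchronized MappedByteBuffer get(Path file) {
        MappedByteBuffer buf = open.get(file);
        if (buf == null) { // cold file: re-map on demand
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            } catch (IOException ex) {
                throw new UncheckedIOException(ex);
            }
            open.put(file, buf);
        }
        return buf;
    }

    public synchronized int openCount() { return open.size(); }
}
```

Reads against recent files stay as fast as today; only reads against evicted old files pay a re-map cost, which the motivation above argues is acceptable for cold data.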
Describe Alternatives You've Considered
none
Additional Context
No response