Open ThreadDao opened 4 months ago
With so many partitions, we might need to change the compaction concurrency and add more datanodes. Currently, I think if we can add more datanodes and let compaction catch up, that would work for us.
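For reference, raising compaction concurrency and scaling out datanodes might look roughly like the fragment below. The key names are a sketch from memory of `milvus.yaml` and the Helm chart values, and should be verified against the deployed Milvus version:

```yaml
# Hypothetical sketch -- verify these keys against your version's milvus.yaml / Helm values.
dataCoord:
  compaction:
    maxParallelTaskNum: 20   # assumed key: raise compaction task concurrency
dataNode:
  replicas: 4                # Helm-style scale-out so compaction can catch up
```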
Even though there are 50K segments, the question is why a 2 * 64G querynode cannot hold 7GB of data in memory.
/assign @ThreadDao /unassign
Is this still reproducible?
Can we still reproduce this? I thought this might be due to flush not being able to catch up, and that we need to improve flush performance.
Is there an existing issue for this?
Environment
Current Behavior
deploy milvus with config
test steps
queryNode OOMKilled
The querynode was OOMKilled after about two minutes of concurrent requests, at around 2024-06-21 03:40:52.
Grafana: metrics of compact-opt-flush3
Pyroscope: alloc_objects of compact-opt-flush3-milvus-querynode
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log