wangting0128 opened this issue 1 week ago
I think this is expected if you have many concurrent reads and the responses are large.
So far there is nothing we can do to handle this.
I don't think this is an important case to handle; let's keep it as is.
Got it! Closed it now.
Let's keep this open and leave it to @zhagnlu @MrPresent-Han.
I can't think of an easy way to fix this; maybe changing the reduce function could solve the problem. Any thoughts?
Anyway, this is not a critical issue.
/assign @zhagnlu
/unassign
Yes, I think if the reduce can be done across multiple segments of the same querynode, maybe we can spill the temporary result to a file to avoid the OOM?
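As a rough illustration of that spill idea, here is a minimal sketch, assuming a hypothetical per-segment reduce loop on the querynode; `SegmentResult`, `mergeTopK`, and `reduceWithSpill` are illustrative names, not actual Milvus code. Once the in-memory accumulator grows past a budget it is encoded to a temp file, so peak memory stays near the budget and the spilled files can be merged in a final streamed pass.

```go
// Hypothetical sketch: spill intermediate reduce output to disk once it
// grows past a budget, instead of holding every segment's result in memory.
package reduce

import (
	"encoding/gob"
	"os"
)

// SegmentResult is a placeholder for one segment's search result.
type SegmentResult struct {
	IDs    []int64
	Scores []float32
	Rows   [][]byte // output fields, ~1 KB per row in the reported case
}

func (r *SegmentResult) sizeBytes() int {
	n := len(r.IDs)*8 + len(r.Scores)*4
	for _, row := range r.Rows {
		n += len(row)
	}
	return n
}

// mergeTopK stands in for the real reduce step (merge by score, keep top-k);
// here it simply concatenates to keep the sketch short.
func mergeTopK(a, b *SegmentResult) *SegmentResult {
	return &SegmentResult{
		IDs:    append(a.IDs, b.IDs...),
		Scores: append(a.Scores, b.Scores...),
		Rows:   append(a.Rows, b.Rows...),
	}
}

// reduceWithSpill merges segment results one by one; when the accumulator
// exceeds memBudget bytes it is written to a temp file and a fresh
// accumulator is started. The spilled files plus the last accumulator can
// then be merged in a final streamed pass (not shown).
func reduceWithSpill(results <-chan *SegmentResult, memBudget int) ([]string, *SegmentResult, error) {
	var spilled []string
	acc := &SegmentResult{}
	for r := range results {
		acc = mergeTopK(acc, r)
		if acc.sizeBytes() > memBudget {
			f, err := os.CreateTemp("", "reduce-spill-*")
			if err != nil {
				return nil, nil, err
			}
			if err := gob.NewEncoder(f).Encode(acc); err != nil {
				f.Close()
				return nil, nil, err
			}
			f.Close()
			spilled = append(spilled, f.Name())
			acc = &SegmentResult{} // start a new in-memory accumulator
		}
	}
	return spilled, acc, nil
}
```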
I think the problem is about streaming reduce on the proxies.
If we have 100 querynodes and the proxy receives a topk = 1000 result from each of them, that is still going to be huge if each row is 1 KB.
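For scale: 100 querynodes each returning topk = 1000 rows of about 1 KB is already roughly 100 MB buffered per search on the proxy before reduce, multiplied by the number of concurrent searches. As a hedged sketch of how a pull-based k-way merge could bound the proxy's working set to one candidate per querynode plus the final top-k (the `nodeStream` interface and `streamingReduce` function below are illustrative, not the actual Milvus proxy API):

```go
// Hypothetical sketch: k-way streaming merge of per-querynode results on the
// proxy, so memory holds one candidate per node plus the final top-k,
// instead of all nodes' full top-k lists at once.
package proxyreduce

import "container/heap"

// candidate is one row pulled from a querynode's result stream.
type candidate struct {
	score float32
	id    int64
	row   []byte
	src   nodeStream // stream it came from, used to pull the next row
}

// nodeStream is an illustrative interface over a querynode's sorted result.
type nodeStream interface {
	Next() (candidate, bool) // next best row from this node, if any
}

type candHeap []candidate

func (h candHeap) Len() int           { return len(h) }
func (h candHeap) Less(i, j int) bool { return h[i].score > h[j].score } // max-heap on score
func (h candHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *candHeap) Push(x any)        { *h = append(*h, x.(candidate)) }
func (h *candHeap) Pop() any {
	old := *h
	c := old[len(old)-1]
	*h = old[:len(old)-1]
	return c
}

// streamingReduce pops the globally best row topk times, refilling the heap
// from whichever node stream the popped row came from.
func streamingReduce(streams []nodeStream, topk int) []candidate {
	h := &candHeap{}
	for _, s := range streams {
		if c, ok := s.Next(); ok {
			heap.Push(h, c)
		}
	}
	out := make([]candidate, 0, topk)
	for len(out) < topk && h.Len() > 0 {
		best := heap.Pop(h).(candidate)
		out = append(out, best)
		if c, ok := best.src.Next(); ok {
			heap.Push(h, c)
		}
	}
	return out
}
```

Whether this actually helps depends on how eagerly the querynodes push their results; if each node's full top-k arrives before the merge starts, the proxy still buffers the whole volume, which is the concern raised above.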
Is there an existing issue for this?
Environment
Current Behavior
server: describe pod, pod monitor (queryNode, proxy) [screenshots not reproduced]
client log: [screenshot not reproduced]
Expected Behavior
No response
Steps To Reproduce
Milvus Log
No response
Anything else?
No response