zhuwenxing opened this issue 15 hours ago
/assign @xiaocai2333 /unassign
After an offline discussion with @foxspy, this looks like an issue with the index engine version (see the sketch below). /assign @foxspy /unassign @xiaocai2333
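For context, here is a minimal Go sketch of how index engine version negotiation during a rolling upgrade generally works. This is illustrative only, not the actual Milvus code; the names `nodeVersion` and `clusterIndexVersion` are hypothetical.

```go
// Illustrative sketch: during a rolling upgrade, nodes of mixed versions
// coexist, so the coordinator should build indexes with the lowest index
// engine version that every live query node can still load; otherwise
// un-upgraded nodes fail to load new-format indexes and may retry
// repeatedly. All names below are hypothetical.
package main

import "fmt"

// nodeVersion is the index engine version range a node reports.
type nodeVersion struct {
	minimal int32 // oldest index format the node can read
	current int32 // newest index format the node supports
}

// clusterIndexVersion picks the version for new index builds: the minimum
// "current" across all live nodes, so every node can load the result.
func clusterIndexVersion(nodes []nodeVersion) int32 {
	if len(nodes) == 0 {
		return 0
	}
	v := nodes[0].current
	for _, n := range nodes[1:] {
		if n.current < v {
			v = n.current
		}
	}
	return v
}

func main() {
	// Mixed-version cluster mid-upgrade: if the coordinator ignored the
	// un-upgraded node and built with version 6, that node could not load
	// the index and might retry-load repeatedly, inflating memory.
	nodes := []nodeVersion{{minimal: 4, current: 6}, {minimal: 3, current: 5}}
	fmt.Println("index version to build with:", clusterIndexVersion(nodes)) // 5
}
```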
@congqixia what about the datanode memory? In @weiliu1031's opinion, it was caused by the large number of collections.
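To make that hypothesis concrete, here is a back-of-the-envelope Go sketch of how per-collection bookkeeping makes datanode memory scale roughly linearly with the collection count. The constants and names are assumptions for illustration, not measured Milvus values.

```go
// Illustrative model only (hypothetical numbers): if each collection keeps
// its own insert buffer plus watcher/goroutine overhead, resident memory
// grows linearly with the number of collections even when each collection
// holds little data.
package main

import "fmt"

const (
	perCollectionBufferBytes = 16 << 20 // e.g. a 16 MiB insert buffer per collection
	perCollectionOverhead    = 1 << 20  // timers, goroutine stacks, metadata caches
)

// estimateDatanodeMemory is a rough estimate, not a measurement.
func estimateDatanodeMemory(numCollections int) int {
	return numCollections * (perCollectionBufferBytes + perCollectionOverhead)
}

func main() {
	for _, n := range []int{100, 1000, 5000} {
		fmt.Printf("%5d collections -> ~%d MiB\n", n, estimateDatanodeMemory(n)>>20)
	}
}
```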
Is there an existing issue for this?
- [x] I have searched the existing issues
Environment
Current Behavior
The drastic surge in memory usage coincided with the moment mixcoord started to upgrade.
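One way to verify this correlation (a sketch, assuming a Prometheus instance scrapes the chaos-testing namespace; the address below is a placeholder) is to pull the query node working-set memory over the upgrade window and compare the surge against the mixcoord restart time:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Placeholder address for the Prometheus scraping the cluster.
	client, err := api.NewClient(api.Config{Address: "http://prometheus.example:9090"})
	if err != nil {
		panic(err)
	}
	papi := promv1.NewAPI(client)

	// Working-set memory of query node pods in the chaos-testing namespace.
	query := `container_memory_working_set_bytes{namespace="chaos-testing",pod=~".*querynode.*"}`
	end := time.Now()
	start := end.Add(-2 * time.Hour) // window covering the rolling upgrade
	result, warnings, err := papi.QueryRange(context.Background(), query, promv1.Range{
		Start: start, End: end, Step: time.Minute,
	})
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```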
Logs from when the query node crashed:
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log
failed job: https://qa-jenkins.milvus.io/blue/organizations/jenkins/rolling_update_for_operator_test_simple/detail/rolling_update_for_operator_test_simple/5485/pipeline
log: artifacts-kafka-mixcoord-5485-server-logs.tar.gz
cluster: 4am
ns: chaos-testing
pod info:
Anything else?
It is a stably reproducible issue.