Closed by zhuwenxing 1 year ago
/assign @jiaoew1991
/assign @weiliu1031 /unassign
For now, Helm rolling upgrade doesn't support upgrading nodes one by one in order, and it doesn't support a graceful stop process. So when you do a rolling upgrade, segments are not balanced as expected, and when an old node goes down, the shard is missing its segments until they have been loaded on a new query node, which may take tens of seconds.
If you want the service to stay available during a rolling upgrade, please do the rolling upgrade with the operator.
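For reference, a rough Go sketch of the ordered, graceful procedure the operator performs and plain Helm does not: drain each query node's segments to its peers before restarting it, one node at a time. This is not the operator's actual code; the `Cluster` interface and its methods are hypothetical, for illustration only.

```go
// upgrade.go — a rough sketch, NOT the Milvus Operator's real code.
// The Cluster interface and its methods are hypothetical, for illustration only.
package upgrade

import (
	"context"
	"fmt"
)

// QueryNode stands in for one query node in the cluster.
type QueryNode struct{ Name string }

// Cluster stands in for whatever controls the deployment (the operator, in practice).
type Cluster interface {
	HandOffSegments(ctx context.Context, n QueryNode) error      // move the node's segments to peers first
	Restart(ctx context.Context, n QueryNode, image string) error // restart the node with the new image
	WaitHealthy(ctx context.Context, n QueryNode) error           // wait until it is serving again
}

// RollingUpgrade upgrades nodes strictly one by one, taking a node down only
// after its segments are served elsewhere, so no shard is ever missing a segment.
func RollingUpgrade(ctx context.Context, c Cluster, nodes []QueryNode, image string) error {
	for _, n := range nodes {
		if err := c.HandOffSegments(ctx, n); err != nil { // the "graceful stop": drain before killing
			return fmt.Errorf("hand off %s: %w", n.Name, err)
		}
		if err := c.Restart(ctx, n, image); err != nil {
			return fmt.Errorf("restart %s: %w", n.Name, err)
		}
		if err := c.WaitHealthy(ctx, n); err != nil { // don't touch the next node until this one serves again
			return fmt.Errorf("wait for %s: %w", n.Name, err)
		}
	}
	return nil
}
```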
It happened after the upgrade finished; we waited 5 minutes before starting the tests.
The upgrade leaves the shard missing segments, and the query node then tries to load those segments. But many load-segment requests arrive at once, which causes a deadlock between load binlog and load segment, since both share the same thread pool (see the sketch below).
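To make that failure mode concrete, here is a minimal, self-contained Go sketch (not Milvus code) of the pattern: a bounded pool where a parent "load segment" task holds a worker slot while waiting on a "load binlog" subtask submitted to the same pool. Once all slots are held by waiting parents, the subtasks can never run, and the Go runtime reports `fatal error: all goroutines are asleep - deadlock!`.

```go
// deadlock.go — a minimal sketch (NOT Milvus code) of the reported pattern:
// "load segment" tasks and their "load binlog" subtasks share one bounded pool,
// and a parent blocks on its subtask while still holding a worker slot.
package main

import (
	"fmt"
	"time"
)

// pool is a fixed-size worker pool modeled with a buffered channel as a semaphore.
type pool struct{ sem chan struct{} }

func newPool(size int) *pool { return &pool{sem: make(chan struct{}, size)} }

// submit schedules fn on the shared pool; the returned channel closes when fn is done.
func (p *pool) submit(fn func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		p.sem <- struct{}{}        // acquire a worker slot
		defer func() { <-p.sem }() // release it when fn returns
		fn()
		close(done)
	}()
	return done
}

func main() {
	p := newPool(2) // two worker slots, shared by segment and binlog tasks

	// Two concurrent "load segment" requests occupy both slots...
	for seg := 0; seg < 2; seg++ {
		seg := seg
		go func() {
			<-p.submit(func() {
				time.Sleep(100 * time.Millisecond) // let both parents grab their slots first
				// ...then each waits for a "load binlog" subtask that also needs a slot
				// from the same, now-exhausted pool: nobody can make progress.
				<-p.submit(func() { fmt.Println("load binlog for segment", seg) })
			})
		}()
	}

	select {} // the runtime soon reports: fatal error: all goroutines are asleep - deadlock!
}
```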
same as #25781
/cc @yah01 @MrPresent-Han we should cherry-pick the fix to the 2.2 branch
Fine, I will cherry-pick it for this.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen
Is there an existing issue for this?
Environment
Current Behavior
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log
Failed job: https://qa-jenkins.milvus.io/blue/organizations/jenkins/deploy_test_kafka_cron/detail/deploy_test_kafka_cron/1180/pipeline
Logs:
artifacts-kafka-cluster-upgrade-1180-server-second-deployment-logs.tar.gz
artifacts-kafka-cluster-upgrade-1180-server-first-deployment-logs.tar.gz
artifacts-kafka-cluster-upgrade-1180-pytest-logs.tar.gz
Anything else?
No response