milvus-io / milvus

A cloud-native vector database, storage for next generation AI applications
https://milvus.io
Apache License 2.0

[Bug]: [benchmark][multi-replicas-loadbalance] proxy and queryNode disconnected, causing query and search request to fail #25917

Closed: wangting0128 closed this issue 1 year ago

wangting0128 commented 1 year ago

Is there an existing issue for this?

Environment

- Milvus version: master-20230725-a6808e64
- Deployment mode (standalone or cluster): cluster
- MQ type (rocksmq, pulsar or kafka): pulsar
- SDK version (e.g. pymilvus v2.0.0rc2): 2.4.0.dev73
- OS (Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:

Current Behavior

server argo task: fouramf-4qh2m
clients argo task: fouramf-concurrent-kdjxx

server:

NAME                                                              READY   STATUS             RESTARTS         AGE     IP              NODE         NOMINATED NODE   READINESS GATES
lb-helm-hnsw-scene-high-etcd-0                                    1/1     Running            0                 21h     10.104.15.206   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-etcd-1                                    1/1     Running            0                 21h     10.104.9.145    4am-node14   <none>           <none>
lb-helm-hnsw-scene-high-etcd-2                                    1/1     Running            0                 21h     10.104.6.45     4am-node13   <none>           <none>
lb-helm-hnsw-scene-high-milvus-datacoord-7c64c68f4-nrrmk          1/1     Running            0                 21h     10.104.15.203   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-milvus-datanode-5765dc5f7-hllbz           1/1     Running            1 (21h ago)       21h     10.104.16.139   4am-node21   <none>           <none>
lb-helm-hnsw-scene-high-milvus-datanode-5765dc5f7-mkjjp           1/1     Running            1 (21h ago)       21h     10.104.17.103   4am-node23   <none>           <none>
lb-helm-hnsw-scene-high-milvus-indexcoord-9cfb49897-zc98w         1/1     Running            0                 21h     10.104.17.101   4am-node23   <none>           <none>
lb-helm-hnsw-scene-high-milvus-indexnode-86d9df4699-lx6b8         1/1     Running            0                 21h     10.104.12.74    4am-node17   <none>           <none>
lb-helm-hnsw-scene-high-milvus-indexnode-86d9df4699-txdwm         1/1     Running            0                 21h     10.104.14.152   4am-node18   <none>           <none>
lb-helm-hnsw-scene-high-milvus-proxy-758549dbf5-h9m67             1/1     Running            1 (21h ago)       21h     10.104.17.102   4am-node23   <none>           <none>
lb-helm-hnsw-scene-high-milvus-querycoord-7cd94bc95d-s46b4        1/1     Running            1 (21h ago)       21h     10.104.15.204   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-milvus-querynode-7f9d45854-66jzh          1/1     Running            0                 21h     10.104.15.205   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-milvus-querynode-7f9d45854-btm6g          1/1     Running            0                 21h     10.104.9.138    4am-node14   <none>           <none>
lb-helm-hnsw-scene-high-milvus-rootcoord-f84c77b9d-wg924          1/1     Running            1 (21h ago)       21h     10.104.5.184    4am-node12   <none>           <none>
lb-helm-hnsw-scene-high-minio-0                                   1/1     Running            0                 21h     10.104.15.208   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-minio-1                                   1/1     Running            0                 21h     10.104.17.105   4am-node23   <none>           <none>
lb-helm-hnsw-scene-high-minio-2                                   1/1     Running            0                 21h     10.104.6.44     4am-node13   <none>           <none>
lb-helm-hnsw-scene-high-minio-3                                   1/1     Running            0                 21h     10.104.23.169   4am-node27   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-bookie-0                           1/1     Running            0                 21h     10.104.9.141    4am-node14   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-bookie-1                           1/1     Running            0                 21h     10.104.5.189    4am-node12   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-bookie-2                           1/1     Running            0                 21h     10.104.23.170   4am-node27   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-bookie-init-qb2tp                  0/1     Completed          0                 21h     10.104.15.199   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-broker-0                           1/1     Running            0                 21h     10.104.5.183    4am-node12   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-proxy-0                            1/1     Running            0                 21h     10.104.14.151   4am-node18   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-pulsar-init-ck6xj                  0/1     Completed          0                 21h     10.104.15.200   4am-node20   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-recovery-0                         1/1     Running            0                 21h     10.104.9.137    4am-node14   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-zookeeper-0                        1/1     Running            0                 21h     10.104.5.186    4am-node12   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-zookeeper-1                        1/1     Running            2 (17m ago)       21h     10.104.13.124   4am-node16   <none>           <none>
lb-helm-hnsw-scene-high-pulsar-zookeeper-2                        1/1     Running            0                 21h     10.104.6.52     4am-node13   <none>           <none>

proxy logs:

[2023/07/25 07:19:37.485 +00:00] [WARN] [conc/pool.go:73] ["get component status failed,set node unreachable"] [node=2] [error="context deadline exceeded"]
[2023/07/25 07:19:37.571 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [nodeID=2]
[2023/07/25 07:19:37.572 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [nodeID=2]
[2023/07/25 07:19:37.572 +00:00] [WARN] [proxy/lb_policy.go:130] ["failed to select shard"] [collectionName=fouram_sGclgNu2] [channelName=by-dev-rootcoord-dml_0_443093086611965509v0] [availableNodes="[2]"] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A, shown once below>]
[2023/07/25 07:19:37.572 +00:00] [WARN] [proxy/lb_policy.go:151] ["failed to select node for shard"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [collectionName=fouram_sGclgNu2] [channelName=by-dev-rootcoord-dml_0_443093086611965509v0] [nodeID=-1] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>]
[2023/07/25 07:19:37.572 +00:00] [ERROR] [retry/retry.go:42] ["retry func failed"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] ["retry time"=0] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>] [stack=<stack trace C, shown once below>]
[2023/07/25 07:19:37.667 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [nodeID=2]
[2023/07/25 07:19:37.667 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [nodeID=2]
[2023/07/25 07:19:37.667 +00:00] [WARN] [proxy/lb_policy.go:130] ["failed to select shard"] [collectionName=fouram_2UsEVoTQ] [channelName=by-dev-rootcoord-dml_5_443093086612165553v1] [availableNodes="[2]"] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>]
[2023/07/25 07:19:37.667 +00:00] [WARN] [proxy/lb_policy.go:151] ["failed to select node for shard"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [collectionName=fouram_2UsEVoTQ] [channelName=by-dev-rootcoord-dml_5_443093086612165553v1] [nodeID=-1] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>]
[2023/07/25 07:19:37.667 +00:00] [ERROR] [retry/retry.go:42] ["retry func failed"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] ["retry time"=0] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>] [stack=<stack trace C>]
[2023/07/25 07:19:37.773 +00:00] [WARN] [proxy/task_query.go:426] ["fail to execute query"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [collection=443093086611965509] [partitionIDs="[]"] [requestType=query] [error="attempt #0: all available nodes are unreachable: service unavailable"]
[2023/07/25 07:19:37.773 +00:00] [WARN] [proxy/task_scheduler.go:460] ["Failed to execute task: "] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B, shown once below>]
[2023/07/25 07:19:37.773 +00:00] [WARN] [proxy/impl.go:2891] ["Query failed to WaitToFinish"] [traceID=3eb4147b5b3526c05f6ccfef47020e2d] [role=proxy] [db=default] [collection=fouram_sGclgNu2] [partitions="[]"] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B>]
[2023/07/25 07:19:37.869 +00:00] [WARN] [proxy/task_query.go:426] ["fail to execute query"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [collection=443093086612165553] [partitionIDs="[]"] [requestType=query] [error="attempt #0: all available nodes are unreachable: service unavailable"]
[2023/07/25 07:19:37.869 +00:00] [WARN] [proxy/task_scheduler.go:460] ["Failed to execute task: "] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B>]
[2023/07/25 07:19:37.869 +00:00] [WARN] [proxy/impl.go:2891] ["Query failed to WaitToFinish"] [traceID=0a7b3cfa217d58f9d62cd0631d1a66bc] [role=proxy] [db=default] [collection=fouram_2UsEVoTQ] [partitions="[]"] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B>]
[2023/07/25 07:19:37.873 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [nodeID=2]
[2023/07/25 07:19:37.874 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [nodeID=2]
[2023/07/25 07:19:37.874 +00:00] [WARN] [proxy/lb_policy.go:130] ["failed to select shard"] [collectionName=fouram_2UsEVoTQ] [channelName=by-dev-rootcoord-dml_5_443093086612165553v1] [availableNodes="[2]"] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>]
[2023/07/25 07:19:37.874 +00:00] [WARN] [proxy/lb_policy.go:151] ["failed to select node for shard"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [collectionName=fouram_2UsEVoTQ] [channelName=by-dev-rootcoord-dml_5_443093086612165553v1] [nodeID=-1] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>]
[2023/07/25 07:19:37.874 +00:00] [ERROR] [retry/retry.go:42] ["retry func failed"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] ["retry time"=0] [error="all available nodes are unreachable: service unavailable"] [errorVerbose=<stack trace A>] [stack=<stack trace C>]
[2023/07/25 07:19:37.942 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=d7321656b7b8af5fd58791bb9a4edf55] [nodeID=2]
[2023/07/25 07:19:37.942 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=d7321656b7b8af5fd58791bb9a4edf55] [nodeID=2]
[2023/07/25 07:19:38.075 +00:00] [WARN] [proxy/task_query.go:426] ["fail to execute query"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [collection=443093086612165553] [partitionIDs="[]"] [requestType=query] [error="attempt #0: all available nodes are unreachable: service unavailable"]
[2023/07/25 07:19:38.075 +00:00] [WARN] [proxy/task_scheduler.go:460] ["Failed to execute task: "] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B>]
[2023/07/25 07:19:38.075 +00:00] [WARN] [proxy/impl.go:2891] ["Query failed to WaitToFinish"] [traceID=edb0db9e1d4ee3f6bdb9a1882e2f01d0] [role=proxy] [db=default] [collection=fouram_2UsEVoTQ] [partitions="[]"] [error="attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders"] [errorVerbose=<stack trace B>]
[2023/07/25 07:19:38.392 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=e36f63c7ea6c5df5c6c03a71c4181d84] [nodeID=2]
[2023/07/25 07:19:38.392 +00:00] [WARN] [proxy/look_aside_balancer.go:93] ["query node is unreachable, skip it"] [traceID=e36f63c7ea6c5df5c6c03a71c4181d84] [nodeID=2]
[2023/07/25 08:28:01.986 +00:00] [WARN] [conc/pool.go:73] ["get component status failed,set node unreachable"] [node=2] [error="context deadline exceeded"]

stack trace A (the errorVerbose attached to every "failed to select shard", "failed to select node for shard", and "retry func failed" entry above):

all available nodes are unreachable: service unavailable
(1) attached stack trace
  -- stack trace:
  | github.com/milvus-io/milvus/pkg/util/merr.WrapErrServiceUnavailable
  |     /go/src/github.com/milvus-io/milvus/pkg/util/merr/utils.go:139
  | github.com/milvus-io/milvus/internal/proxy.(*LookAsideBalancer).SelectNode
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/look_aside_balancer.go:115
  | github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).selectNode
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:128
  | github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).ExecuteWithRetry.func1
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:149
  | github.com/milvus-io/milvus/pkg/util/retry.Do
  |     /go/src/github.com/milvus-io/milvus/pkg/util/retry/retry.go:40
  | github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).ExecuteWithRetry
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:148
  | github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).Execute.func2
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:199
  | golang.org/x/sync/errgroup.(*Group).Go.func1
  |     /go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75
  | runtime.goexit
  |     /usr/local/go/src/runtime/asm_amd64.s:1571
Wraps: (2) all available nodes are unreachable
Wraps: (3) service unavailable
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) merr.milvusError

stack trace B (the errorVerbose attached to every "Failed to execute task" and "Query failed to WaitToFinish" entry above):

attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders
(1) attached stack trace
  -- stack trace:
  | github.com/milvus-io/milvus/pkg/util/merr.WrapErrShardDelegatorQueryFailed
  |     /go/src/github.com/milvus-io/milvus/pkg/util/merr/utils.go:497
  | github.com/milvus-io/milvus/internal/proxy.(*queryTask).Execute
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/task_query.go:427
  | github.com/milvus-io/milvus/internal/proxy.(*taskScheduler).processTask
  |     /go/src/github.com/milvus-io/milvus/internal/proxy/task_scheduler.go:457
  | runtime.goexit
  |     /usr/local/go/src/runtime/asm_amd64.s:1571
Wraps: (2) attempt #0: all available nodes are unreachable: service unavailable
Wraps: (3) fail to query on all shard leaders
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) merr.milvusError

stack trace C (the [stack=...] field on every ERROR "retry func failed" entry above):

github.com/milvus-io/milvus/pkg/util/retry.Do
    /go/src/github.com/milvus-io/milvus/pkg/util/retry/retry.go:42
github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).ExecuteWithRetry
    /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:148
github.com/milvus-io/milvus/internal/proxy.(*LBPolicyImpl).Execute.func2
    /go/src/github.com/milvus-io/milvus/internal/proxy/lb_policy.go:199
golang.org/x/sync/errgroup.(*Group).Go.func1
    /go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75

clients search error: clients.log

[2023-07-25 07:19:37,869 - ERROR - fouram]: RPC error: [query], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders)>, <Time:{'RPC start': '2023-07-25 07:19:37.665762', 'RPC error': '2023-07-25 07:19:37.869309'}> (decorators.py:108)
[2023-07-25 07:19:38,075 - ERROR - fouram]: RPC error: [query], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders)>, <Time:{'RPC start': '2023-07-25 07:19:37.871266', 'RPC error': '2023-07-25 07:19:38.075366'}> (decorators.py:108)
[2023-07-25 07:19:46,959 - ERROR - fouram]: RPC error: [query], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to query on all shard leaders)>, <Time:{'RPC start': '2023-07-25 07:19:37.568227', 'RPC error': '2023-07-25 07:19:46.959679'}> (decorators.py:108)
[2023-07-25 08:28:03,611 - ERROR - fouram]: RPC error: [search], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to search on all shard leaders)>, <Time:{'RPC start': '2023-07-25 08:28:03.122731', 'RPC error': '2023-07-25 08:28:03.611751'}> (decorators.py:108)
[2023-07-25 08:28:07,128 - ERROR - fouram]: RPC error: [search], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to search on all shard leaders)>, <Time:{'RPC start': '2023-07-25 08:28:02.213243', 'RPC error': '2023-07-25 08:28:07.128294'}> (decorators.py:108)
[2023-07-25 08:28:07,677 - ERROR - fouram]: RPC error: [search], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to search on all shard leaders)>, <Time:{'RPC start': '2023-07-25 08:28:05.928132', 'RPC error': '2023-07-25 08:28:07.677606'}> (decorators.py:108)
[2023-07-25 08:28:08,414 - ERROR - fouram]: RPC error: [search], <MilvusException: (code=1, message=attempt #0: all available nodes are unreachable: service unavailable: fail to search on all shard leaders)>, <Time:{'RPC start': '2023-07-25 08:28:06.324470', 'RPC error': '2023-07-25 08:28:08.414210'}> (decorators.py:108)

Expected Behavior

No response

Steps To Reproduce

1. deploy a Milvus cluster with 2 queryNodes
2. run 10 concurrent clients of 2 types, 5 clients per type: replica=1 and replica=2
   a. create a collection with shard_num=2
   b. insert 5m rows and build an HNSW index
   c. load with replica=1 or 2  <- raises the error
   d. run concurrent query, search, and scene_search_test tasks via locust  <- raises the error
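The load step above can be sketched in pymilvus roughly as follows. The host, collection name, and field names are illustrative assumptions; the real run drives this through the fouram benchmark harness, not this script.

```python
# Rough pymilvus sketch of steps a-c above (hypothetical names throughout).
HNSW_INDEX = {
    "index_type": "HNSW",
    "metric_type": "L2",
    "params": {"M": 8, "efConstruction": 200},
}

def reproduce(replica_number):
    import numpy as np
    from pymilvus import (Collection, CollectionSchema, DataType,
                          FieldSchema, connections)

    connections.connect(host="localhost", port="19530")
    schema = CollectionSchema([
        FieldSchema("id", DataType.INT64, is_primary=True),
        FieldSchema("vec", DataType.FLOAT_VECTOR, dim=128),
    ])
    coll = Collection("fouram_repro", schema, shards_num=2)
    # insert 5m rows in batches of 50k (ni_per from the client config below)
    for start in range(0, 5_000_000, 50_000):
        ids = list(range(start, start + 50_000))
        vecs = np.random.random((50_000, 128)).tolist()
        coll.insert([ids, vecs])
    coll.flush()
    coll.create_index("vec", HNSW_INDEX)
    coll.load(replica_number=replica_number)  # <- the load error is raised here
```

With 5 clients each calling `reproduce(1)` and `reproduce(2)` concurrently, both replica settings are exercised against the same 2-queryNode cluster.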

scene_search_test:
1. create a collection with dim=128, shards_num=2
2. insert 3000 rows
3. flush the collection
4. count the entities in the collection
5. build an IVF_SQ8 index with nlist=2048
6. load the collection with replica=1 or 2
7. search the collection once with nprobe=1028, nq=1, topk=10
8. drop the collection
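One scene_search_test iteration roughly corresponds to the pymilvus calls below. The collection and field names are made up for illustration, and the client actually runs this under locust rather than as a standalone script.

```python
# Rough pymilvus sketch of one scene_search_test iteration (steps 1-8 above).
import uuid

IVF_SQ8_INDEX = {
    "index_type": "IVF_SQ8",
    "metric_type": "L2",
    "params": {"nlist": 2048},
}
SEARCH_PARAM = {"metric_type": "L2", "params": {"nprobe": 1028}}

def scene_search_test(replica_number):
    import numpy as np
    from pymilvus import (Collection, CollectionSchema, DataType,
                          FieldSchema, connections)

    connections.connect(host="localhost", port="19530")
    schema = CollectionSchema([
        FieldSchema("id", DataType.INT64, is_primary=True),
        FieldSchema("vec", DataType.FLOAT_VECTOR, dim=128),
    ])
    name = "scene_" + uuid.uuid4().hex[:8]  # fresh collection per iteration
    coll = Collection(name, schema, shards_num=2)
    vecs = np.random.random((3000, 128)).tolist()
    coll.insert([list(range(3000)), vecs])
    coll.flush()
    assert coll.num_entities == 3000        # step 4: count entities
    coll.create_index("vec", IVF_SQ8_INDEX)
    coll.load(replica_number=replica_number)
    coll.search([vecs[0]], "vec", SEARCH_PARAM, limit=10)  # nq=1, topk=10
    coll.drop()
```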

Milvus Log

No response

Anything else?

fouramf-server-lb-2qn-2dn-large:

    queryNode:
      resources:
        limits:
          cpu: '50.0'
          memory: 100Gi
        requests:
          cpu: '25.0'
          memory: 50Gi
      replicas: 2
    indexNode:
      resources:
        limits:
          cpu: '16.0'
          memory: 16Gi
        requests:
          cpu: '5.0'
          memory: 5Gi
      replicas: 2
    dataNode:
      replicas: 2
      resources:
        limits:
          cpu: '2.0'
          memory: 16Gi
        requests:
          cpu: '2.0'
          memory: 2Gi

fouramf-client-sift-hnsw-replica2-shard2-search-query-scene-high:

    load_params:
      replica_number: 2
    collection_params:
      shards_num: 2
    dataset_params:
      dim: 128
      dataset_name: sift
      dataset_size: 5m
      ni_per: 50000
      metric_type: L2
    index_params:
      index_type: HNSW
      index_param:
        M: 8
        efConstruction: 200
    concurrent_params:
      concurrent_number: 100
      during_time: 12h
      interval: 20
    concurrent_tasks:
      - type: query
        weight: 1
        params:
          expr: ''
          random_data: true
          random_count: 10
          random_range: [0, 500000]
      - type: search
        weight: 1
        params:
          nq: 10000
          top_k: 10
          search_param:
            ef: 64
          timeout: 60
          random_data: true
      - type: scene_search_test
        weight: 1
        params:
          shards_num: 2
          replica_number: 2
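The query and search tasks in the config above map roughly onto pymilvus calls like the ones sketched below. The field names `id`/`vec` and the shape of the generated expression are assumptions; the fouram harness is what actually produces the random data.

```python
# Hypothetical mapping of the query/search concurrent_tasks onto client calls.
import random

def random_id_expr(count=10, lo=0, hi=500_000):
    # query task: random_count=10 ids drawn from random_range=[0, 500000],
    # assuming an int64 primary key named "id"
    ids = random.sample(range(lo, hi), count)
    return f"id in {ids}"

SEARCH_PARAM = {"metric_type": "L2", "params": {"ef": 64}}

# With a loaded Collection `coll`, each locust task would issue roughly:
#   coll.query(expr=random_id_expr())
#   coll.search(random_vectors, "vec", SEARCH_PARAM, limit=10, timeout=60)
# where random_vectors is an nq=10000 batch of random dim=128 vectors.
```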

fouramf-client-sift-hnsw-replica1-shard2-search-query-scene-high:

    load_params:
      replica_number: 1
    collection_params:
      shards_num: 2
    dataset_params:
      dim: 128
      dataset_name: sift
      dataset_size: 5m
      ni_per: 50000
      metric_type: L2
    index_params:
      index_type: HNSW
      index_param:
        M: 8
        efConstruction: 200
    concurrent_params:
      concurrent_number: 100
      during_time: 12h
      interval: 20
    concurrent_tasks:
      - type: query
        weight: 1
        params:
          expr: ''
          random_data: true
          random_count: 10
          random_range: [0, 500000]
      - type: search
        weight: 1
        params:
          nq: 10000
          top_k: 10
          search_param:
            ef: 64
          timeout: 60
          random_data: true
      - type: scene_search_test
        weight: 1
        params:
          shards_num: 2
          replica_number: 1
weiliu1031 commented 1 year ago

This should be fixed by #26043, please verify.

weiliu1031 commented 1 year ago

/assign @wangting0128

wangting0128 commented 1 year ago

/assign @wangting0128

recurred: https://github.com/milvus-io/milvus/issues/25905#issuecomment-1664928218

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

wangting0128 commented 1 year ago

keep an eye on it

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

yanliang567 commented 1 year ago

keep it

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.