scylladb / scylla-operator

The Kubernetes Operator for ScyllaDB
https://operator.docs.scylladb.com/

Operator duplicates `decommission` API calls while the operation triggered by the first call is still ongoing, generating false error messages in the DB log #1237

Open vponomaryov opened 1 year ago

vponomaryov commented 1 year ago

Describe the bug
When running a decommission operation on a DB pod/node, we see the following messages in the target pod's logs:

INFO  2023-04-30 02:07:20,147 [shard  6] api - decommission
INFO  2023-04-30 02:07:20,148 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Started decommission operation: node=960bb850-0c7a-4f68-876a-2d296d4348f4/172.20.90.76
INFO  2023-04-30 02:07:20,148 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: sync_nodes=decommission, ignore_nodes=960bb850-0c7a-4f68-876a-2d296d4348f4
INFO  2023-04-30 02:07:20,154 [shard  0] storage_service - DECOMMISSIONING: starts
INFO  2023-04-30 02:07:20,154 [shard  0] raft_group0 - Performing a group 0 read barrier...
INFO  2023-04-30 02:07:20,155 [shard  0] raft_group0 - Finished group 0 read barrier.
INFO  2023-04-30 02:07:20,155 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Started heartbeat_updater (interval=10s)
INFO  2023-04-30 02:07:20,155 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Started decommission_prepare[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: ignore_nodes=[], leaving_nodes=[172.20.90.76], replace_nodes={}, bootstrap_nodes={}, repair_tables=[]
INFO  2023-04-30 02:07:20,156 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Added node=172.20.90.76 as leaving node, coordinator=172.20.90.76
INFO  2023-04-30 02:07:20,191 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Finished decommission_prepare[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: ignore_nodes=[], leaving_nodes=[172.20.90.76], replace_nodes={}, bootstrap_nodes={}, repair_tables=[]
INFO  2023-04-30 02:07:20,191 [shard  0] storage_service - DECOMMISSIONING: unbootstrap starts
INFO  2023-04-30 02:07:20,191 [shard  0] storage_service - Started batchlog replay for decommission
INFO  2023-04-30 02:07:20,192 [shard  0] storage_service - Finished batchlog replay for decommission
INFO  2023-04-30 02:07:20,192 [shard  0] storage_service - enable_repair_based_node_ops=true, allowed_repair_based_node_ops={decommission ,bootstrap ,rebuild ,removenode ,replace}
INFO  2023-04-30 02:07:20,194 [shard  0] repair - decommission_with_repair: started with keyspaces=ks_truncate_large_partitionkeyspace1system_authsystem_tracessystem_distributedsystem_distributed_everywhere, leaving_node=172.20.90.76, ignore_nodes=[]
...
E0430 02:07:35.150528       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: after 15s: timeout
INFO  2023-04-30 02:07:35,170 [shard  3] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.173Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"5ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:35 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.177395       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:35,199 [shard  9] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.201Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"2ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:34 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.204534       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:35,236 [shard 10] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.238Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"2ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:34 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.242395       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
I0430 02:07:35.288943       1 sidecar/probes.go:122] "readyz probe: node is not ready" Service="scylla/sct-cluster-us-east1-b-us-east1-1-0"
INFO  2023-04-30 02:07:35,289 [shard  4] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.291Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"2ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:35 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.294287       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:35,383 [shard  5] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.386Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"3ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:35 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.388904       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:35,557 [shard 13] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.560Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"3ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:34 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.563530       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:35,887 [shard  0] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:35.888Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"1ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:34 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:35.891407       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:36,537 [shard  1] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:36.538Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"1ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:35 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:36.538843       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:07:37,828 [shard  2] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:37.830Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"2ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:37 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:37.834239       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
...
INFO  2023-04-30 02:07:40,402 [shard  3] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:40.404Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"2ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:40 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:40.407930       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
...
INFO  2023-04-30 02:07:45,530 [shard  4] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:45.530Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"0ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:45 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:45.531543       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
...
INFO  2023-04-30 02:07:55,772 [shard 10] api - decommission
{"L":"INFO","T":"2023-04-30T02:07:55.773Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"0ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:07:54 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:07:55.773492       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
I0430 02:08:05.273587       1 sidecar/probes.go:122] "readyz probe: node is not ready" Service="scylla/sct-cluster-us-east1-b-us-east1-1-0"
I0430 02:08:15.274259       1 sidecar/probes.go:122] "readyz probe: node is not ready" Service="scylla/sct-cluster-us-east1-b-us-east1-1-0"
INFO  2023-04-30 02:08:16,255 [shard 12] api - decommission
{"L":"INFO","T":"2023-04-30T02:08:16.255Z","N":"scylla_client","M":"HTTP","host":"localhost:10000","method":"POST","uri":"/storage_service/decommission","duration":"0ms","status":500,"bytes":97,"dump":"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 97\r\nContent-Type: application/json\r\nDate: Sun, 30 Apr 2023 02:08:15 GMT\r\nServer: Seastar httpd\r\n\r\n{\"message\": \"std::runtime_error (Operation decommission is in progress, try again)\", \"code\": 500}"}
E0430 02:08:16.256211       1 sidecar/controller.go:141] syncing key 'scylla/sct-cluster-us-east1-b-us-east1-1-0' failed: can't decommision a node: can't decommission the node: agent [HTTP 500] std::runtime_error (Operation decommission is in progress, try again)
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: left token ring
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Stopped heartbeat_updater
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Started decommission_done[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: ignore_nodes=[], leaving_nodes=[172.20.90.76], replace_nodes={}, bootstrap_nodes={}, repair_tables=[]
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Started to check if nodes=[172.20.90.76] have left the cluster, coordinator=172.20.90.76
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Finished to check if nodes=[172.20.90.76] have left the cluster, coordinator=172.20.90.76
INFO  2023-04-30 02:08:19,891 [shard  0] storage_service - decommission[52b651c0-7a1f-45e0-8d86-6ec00e0bd3f4]: Marked ops done from coordinator=172.20.90.76

In total there are 13 redundant API calls to Scylla, 7 of which are made in less than 1 second. They all look like the same API query being retried with exponential back-off, attempting to decommission the same node again and again until the decommission triggered by the very first call succeeds.
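For reference, a minimal sketch of this timing pattern, assuming a client-go style exponential failure rate limiter with the default 5ms/1000s parameters (the sidecar's actual retry configuration may differ):

```go
// Prints the retry delays an exponential-failure rate limiter would produce
// for the same work item, to show how ~7 retries fit into the first second
// while retry 13 lands roughly 41s after retry 1, similar to the log above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Hypothetical base/max delays (client-go defaults); not the operator's actual values.
	rl := workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second)

	elapsed := time.Duration(0)
	for i := 1; i <= 13; i++ {
		d := rl.When("decommission") // delay before the next retry of the same key
		elapsed += d
		fmt.Printf("retry %2d after %10v (total %10v)\n", i, d, elapsed.Round(time.Millisecond))
	}
	// With a 5ms base the delays double each time: the first 7 retries happen
	// within ~0.6s, and the 13th comes ~41s after the first.
}
```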

So, the problems caused by this behavior:

To Reproduce
Steps to reproduce the behavior:

  1. Deploy a 3-4 node DB cluster
  2. Decommission 1 pod
  3. See messages about redundant API calls in the logs of that pod

Expected behavior
The scylla-operator must not make redundant API calls and, moreover, must not log false errors like the ones above.

Logs
CI job: https://jenkins.scylladb.com/job/scylla-operator/job/operator-master/job/eks/job/longevity-scylla-operator-3h-eks/76
db-cluster: https://cloudius-jenkins-test.s3.amazonaws.com/9c3e6d95-056c-4bae-be3a-9140da5e4d52/20230430_044725/db-cluster-9c3e6d95.tar.gz
sct-runner-events: https://cloudius-jenkins-test.s3.amazonaws.com/9c3e6d95-056c-4bae-be3a-9140da5e4d52/20230430_044725/sct-runner-events-9c3e6d95.tar.gz
sct-runner-log: https://cloudius-jenkins-test.s3.amazonaws.com/9c3e6d95-056c-4bae-be3a-9140da5e4d52/20230430_044725/sct-9c3e6d95.log.tar.gz
kubernetes-log: https://cloudius-jenkins-test.s3.amazonaws.com/9c3e6d95-056c-4bae-be3a-9140da5e4d52/20230430_044725/kubernetes-9c3e6d95.tar.gz

Environment:

soyacz commented 1 year ago

another reproduction:

Installation details

Kernel Version: 5.10.178-162.673.amzn2.x86_64
Scylla version (or git commit hash): 5.3.0~dev-20230512.7fcc4031229b with build-id d6f9b433d295cf0420d28abedc89ff756eb0b75e

Operator Image: scylladb/scylla-operator:latest
Operator Helm Version: v1.9.0-alpha.3-5-g34369da
Operator Helm Repository: https://storage.googleapis.com/scylla-operator-charts/latest
Cluster size: 4 nodes (i3.4xlarge)

Scylla Nodes used in this run: No resources left at the end of the run

OS / Image: `` (k8s-eks: eu-north-1)

Test: longevity-scylla-operator-3h-eks
Test id: be583031-c65c-41ae-9bc9-359c7d6739d6
Test name: scylla-operator/operator-master/eks/longevity-scylla-operator-3h-eks
Test config file(s):

Logs and commands
- Restore Monitor Stack command: `$ hydra investigate show-monitor be583031-c65c-41ae-9bc9-359c7d6739d6`
- Restore monitor on AWS instance using [Jenkins job](https://jenkins.scylladb.com/view/QA/job/QA-tools/job/hydra-show-monitor/parambuild/?test_id=be583031-c65c-41ae-9bc9-359c7d6739d6)
- Show all stored logs command: `$ hydra investigate show-logs be583031-c65c-41ae-9bc9-359c7d6739d6`

Logs:
- **db-cluster-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/db-cluster-be583031.tar.gz
- **sct-runner-events-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/sct-runner-events-be583031.tar.gz
- **sct-be583031.log.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/sct-be583031.log.tar.gz
- **monitor-set-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/monitor-set-be583031.tar.gz
- **loader-set-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/loader-set-be583031.tar.gz
- **kubernetes-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/kubernetes-be583031.tar.gz
- **parallel-timelines-report-be583031.tar.gz** - https://cloudius-jenkins-test.s3.amazonaws.com/be583031-c65c-41ae-9bc9-359c7d6739d6/20230514_075841/parallel-timelines-report-be583031.tar.gz

[Jenkins job URL](https://jenkins.scylladb.com/job/scylla-operator/job/operator-master/job/eks/job/longevity-scylla-operator-3h-eks/83/)
zimnx commented 1 year ago

Duplicated logs aren't a real issue.

The root cause is that Scylla doesn't immediately switch the gossip status of a node once we trigger decommission, so there may be multiple calls in between the UN -> Decommissioning change.

But as far as I know, it doesn't interrupt service in any way.
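For illustration, a hedged sketch of how a caller could avoid re-issuing the call while an earlier decommission is still in flight. The `/storage_service/operation_mode` endpoint and the mode strings are assumptions based on Scylla's Cassandra-compatible REST API and may differ between versions; this is not the operator's implementation.

```go
// Checks the node's operation mode before issuing a decommission request, so the
// call is skipped (instead of returning HTTP 500 "Operation decommission is in
// progress") while a topology operation is still running.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

const scyllaAPI = "http://localhost:10000" // hypothetical local API address

func operationMode() (string, error) {
	resp, err := http.Get(scyllaAPI + "/storage_service/operation_mode") // assumed endpoint
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	// The API returns a JSON-quoted string, e.g. "NORMAL" or "LEAVING".
	return strings.Trim(strings.TrimSpace(string(body)), `"`), nil
}

func maybeDecommission() error {
	mode, err := operationMode()
	if err != nil {
		return err
	}
	if mode != "NORMAL" {
		// A decommission (or another topology operation) is already in flight; skip the call.
		fmt.Printf("skipping decommission, node operation mode is %q\n", mode)
		return nil
	}
	resp, err := http.Post(scyllaAPI+"/storage_service/decommission", "application/json", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("decommission failed: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := maybeDecommission(); err != nil {
		fmt.Println("error:", err)
	}
}
```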

vponomaryov commented 1 year ago

> Duplicated logs aren't a real issue.
>
> The root cause is that Scylla doesn't immediately switch the gossip status of a node once we trigger decommission, so there may be multiple calls in between the UN -> Decommissioning change.
>
> But as far as I know, it doesn't interrupt service in any way.

Technically, these are not duplicated logs; they are logs of the API calls that were actually made. The issue is that, from the scylla-operator's point of view, these are ordinary API-call retries, but for us they are false negatives.

How should we distinguish a real decommission problem from a retry?

zimnx commented 1 year ago

The operator constantly retries actions until they succeed. You may observe multiple retries everywhere.

> How should we distinguish a real decommission problem from a retry?

Treat it as a black box, observe actions and status, but don't look inside.

If the decommission doesn't complete in a reasonable time, consider it failed. The number of retries doesn't matter as long as the required actions happen.
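A minimal sketch of that "black box" approach: ignore individual retries and judge the decommission only by whether it finishes within a deadline. The `isDecommissioned` predicate is a hypothetical caller-supplied check (for example, "the node no longer appears in the ring").

```go
// Polls a completion check until it reports done or a deadline expires;
// retries in between are irrelevant, only the final outcome matters.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitForDecommission(ctx context.Context, timeout, interval time.Duration, isDecommissioned func() (bool, error)) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		done, err := isDecommissioned()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("decommission did not finish within the deadline; treat it as failed")
		case <-ticker.C:
		}
	}
}

func main() {
	start := time.Now()
	// Toy predicate: pretend the decommission finishes after 3 seconds.
	err := waitForDecommission(context.Background(), 10*time.Second, 500*time.Millisecond, func() (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	})
	fmt.Println("result:", err)
}
```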

scylla-operator-bot[bot] commented 1 month ago

The Scylla Operator project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

/lifecycle stale

scylla-operator-bot[bot] commented 3 weeks ago

The Scylla Operator project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

/lifecycle rotten