scylladb / scylla-manager

The Scylla Manager
https://manager.docs.scylladb.com/stable/

Repair task failed after an hour with zero token nodes in multi dc configuration #4078

Open aleksbykov opened 2 hours ago

aleksbykov commented 2 hours ago

Packages

Scylla version: 6.2.0-20241013.b8a9fd4e49e8 with build-id a61f658b0408ba10663812f7a3b4d6aea7714fac

Kernel Version: 6.8.0-1016-aws

Scylla Manager Agent: 3.3.3-0.20240912.924034e0d

Issue description

The cluster is configured with zero-token nodes in a multi-DC setup: DC "eu-west-1" has 3 data nodes, DC "eu-west-2" has 3 data nodes and 1 zero-token node, and DC "eu-north-1" has 1 zero-token node.

The nemesis 'disrupt_mgmt_corrupt_then_repair' failed. This nemesis stops Scylla, removes several SSTables, starts Scylla again, and then triggers a repair from Scylla Manager. The nemesis chose node4 (a data node) as the target node, removed the SSTables while Scylla was stopped, and triggered the repair from Scylla Manager once Scylla was back up (a rough sketch of this sequence follows the task output below). The repair task failed after about an hour:

sdcm.mgmt.common.ScyllaManagerError: Task: repair/362a4112-02b8-47f3-ae49-49c47600de51 final status is: ERROR.
Task progress string: Run:      a1edb893-8c11-11ef-bb82-0a7de1e926c3
Status:     ERROR
Cause:      see more errors in logs: master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED
Start time: 16 Oct 24 22:54:43 UTC
End time:   17 Oct 24 00:06:52 UTC
Duration:   1h12m9s
Progress:   0%/99%
Intensity:  1
Parallel:   0
Datacenters:    
  - eu-northscylla_node_north
  - eu-west-2scylla_node_west
  - eu-westscylla_node_west

╭───────────────────────────────┬────────────────────────────────┬──────────┬──────────╮
│ Keyspace                      │                          Table │ Progress │ Duration │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ keyspace1                     │                      standard1 │ 0%/100%  │ 1h11m50s │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed_everywhere │ cdc_generation_descriptions_v2 │ 100%     │ 0s       │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed            │      cdc_generation_timestamps │ 100%     │ 0s       │
│ system_distributed            │    cdc_streams_descriptions_v2 │ 100%     │ 0s       │
│ system_distributed            │                 service_levels │ 100%     │ 0s       │
│ system_distributed            │              view_build_status │ 100%     │ 0s       │
╰───────────────────────────────┴────────────────────────────────┴──────────┴──────────╯
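For clarity, here is a rough sketch of what the nemesis does, written as illustrative Go. This is not the actual SCT nemesis code; the SSTable glob and the cluster id are placeholders.

```go
// Illustrative sketch of the nemesis sequence (not the actual SCT code).
// The data-directory glob and the cluster id are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command on the target node and fails loudly on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
	}
}

func main() {
	// 1. Stop Scylla on the target data node (node4 in this run).
	run("sudo", "systemctl", "stop", "scylla-server")

	// 2. Remove several SSTables of keyspace1.standard1 while the node is down.
	//    The glob below is only an example of which files get removed.
	run("sudo", "bash", "-c",
		"rm -f /var/lib/scylla/data/keyspace1/standard1-*/*-Data.db")

	// 3. Start Scylla again.
	run("sudo", "systemctl", "start", "scylla-server")

	// 4. Trigger a repair of the cluster through Scylla Manager.
	run("sctool", "repair", "--cluster", "<cluster-id>")
}
```

In this run only the last step, the Manager-driven repair, failed; the target node itself restarted normally.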

The following error was found in the Scylla Manager log in "monitor-set-2bc4de73.tar.gz":

Oct 17 00:06:41 multi-dc-rackaware-with-znode-dc-fe-monitor-node-2bc4de73-1 scylla-manager[7935]: {"L":"ERROR","T":"2024-10-17T00:06:41.197Z","N":"repair.keyspace1.standard1","M":"Repair failed","error":"master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED","_trace_id":"MQddNqAdRnuC207sElnpJg","errorStack":"github.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:58\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:100\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).HandleJob\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:30\ngithub.com/scylladb/scylla-manager/v3/pkg/util/workerpool.(*Pool[...]).spawn.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/util@v0.0.0-20240902115944-7914bb0d3b80/workerpool/pool.go:99\nruntime.goexit\n\truntime/asm_amd64.s:1695\n","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/go-log@v0.0.7/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).processResult\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:334\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:219\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*generator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:148\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*Service).Repair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/service.go:304\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.Runner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/runner.go:26\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.PolicyRunner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/policy.go:32\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.(*Service).run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/service.go:448\ngithub.com/scylladb/scylla-manager/v3/pkg/scheduler.(*Scheduler[...]).asyncRun.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/scheduler/scheduler.go:401"}

This could be related to the zero-token nodes in the configuration.

Impact

The repair run from Scylla Manager failed.

Installation details

Cluster size: 6 nodes (i4i.4xlarge)

Scylla Nodes used in this run:

OS / Image: ami-01f5cd2cb7c8dbd6f ami-0a32db7034cf41d95 ami-0b2b4e9fba26c7618 (aws: undefined_region)

Test: longevity-multi-dc-rack-aware-zero-token-dc
Test id: 2bc4de73-4328-4444-b601-6bd88060fa4d
Test name: scylla-staging/abykov/longevity-multi-dc-rack-aware-zero-token-dc
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Logs and commands

- Restore Monitor Stack command: `$ hydra investigate show-monitor 2bc4de73-4328-4444-b601-6bd88060fa4d`
- Restore monitor on AWS instance using [Jenkins job](https://jenkins.scylladb.com/view/QA/job/QA-tools/job/hydra-show-monitor/parambuild/?test_id=2bc4de73-4328-4444-b601-6bd88060fa4d)
- Show all stored logs command: `$ hydra investigate show-logs 2bc4de73-4328-4444-b601-6bd88060fa4d`

Logs:

- **db-cluster-2bc4de73.tar.gz** - [https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/db-cluster-2bc4de73.tar.gz](https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/db-cluster-2bc4de73.tar.gz)
- **sct-runner-events-2bc4de73.tar.gz** - [https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/sct-runner-events-2bc4de73.tar.gz](https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/sct-runner-events-2bc4de73.tar.gz)
- **sct-2bc4de73.log.tar.gz** - [https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/sct-2bc4de73.log.tar.gz](https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/sct-2bc4de73.log.tar.gz)
- **loader-set-2bc4de73.tar.gz** - [https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/loader-set-2bc4de73.tar.gz](https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/loader-set-2bc4de73.tar.gz)
- **monitor-set-2bc4de73.tar.gz** - [https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/monitor-set-2bc4de73.tar.gz](https://cloudius-jenkins-test.s3.amazonaws.com/2bc4de73-4328-4444-b601-6bd88060fa4d/20241017_044804/monitor-set-2bc4de73.tar.gz)

[Jenkins job URL](https://jenkins.scylladb.com/job/scylla-staging/job/abykov/job/longevity-multi-dc-rack-aware-zero-token-dc/26/)
[Argus](https://argus.scylladb.com/test/bbd702fb-2f87-4b0b-a068-c2c83d74cb77/runs?additionalRuns[]=2bc4de73-4328-4444-b601-6bd88060fa4d)
aleksbykov commented 2 hours ago

@kbr-scylla @patjed41, this bug is not related to Scylla directly (at least I didn't find any issue in the Scylla logs), but the Scylla Manager repair task failed, and it looks like it could be related to the zero-token nodes.

kbr-scylla commented 2 hours ago

Could be that support for zero-token nodes needs to be explicitly implemented in Scylla Manager. Maybe it assumes that every node has tokens.
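If that is indeed the assumption, one possible direction (purely a sketch with hypothetical names, not scylla-manager's actual types or API) would be to skip token-less nodes when collecting the repair participants:

```go
// Hypothetical sketch; nodeInfo and repairHosts are made-up names and do
// not correspond to scylla-manager's actual code.
package repairplan

// nodeInfo is a minimal stand-in for per-node ring information.
type nodeInfo struct {
	Host   string
	DC     string
	Tokens []int64 // empty for a zero-token node
}

// repairHosts returns only the nodes that own token ranges; a node without
// tokens holds no data and should not be treated as a repair participant.
func repairHosts(nodes []nodeInfo) []nodeInfo {
	hosts := make([]nodeInfo, 0, len(nodes))
	for _, n := range nodes {
		if len(n.Tokens) == 0 {
			continue // skip zero-token nodes (e.g. the eu-north-1 node here)
		}
		hosts = append(hosts, n)
	}
	return hosts
}
```

Zero-token nodes own no token ranges, so excluding them from the participant list would not change which ranges need repairing.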