cockroachdb / cockroach

CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

cli: TestLossOfQuorumRecovery failed #121547

Closed. cockroach-teamcity closed this issue 6 months ago.

cockroach-teamcity commented 6 months ago

cli.TestLossOfQuorumRecovery failed on master @ 7cea6a90eed456c76dfa07e618c3cd2257b302e5:

=== RUN   TestLossOfQuorumRecovery
    test_log_scope.go:170: test logs captured to: outputs.zip/logTestLossOfQuorumRecovery1785584054
    test_log_scope.go:81: use -show-logs to present logs inline
debug recover collect-info --store=/var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/store-1 /var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/node-1.json
[debug recover collect-info --store=/var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/store-1 /var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/node-1.json]
Collected recovery info from:
nodes             1
stores            1
Collected info:
replicas          67
range descriptors 0
[debug recover make-plan --confirm=y --plan=/var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/recovery-plan.json /var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/node-1.json]
    debug_recover_loss_of_quorum_test.go:235: 
            Error Trace:    github.com/cockroachdb/cockroach/pkg/cli/debug_recover_loss_of_quorum_test.go:235
            Error:          "debug recover make-plan --confirm=y --plan=/var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/recovery-plan.json /var/lib/engflow/worker/work/2/exec/_tmp/33cad9ecca6f841bff6186de727627fd/TestLossOfQuorumRecovery3757563044/node-1.json\nTotal replicas analyzed: 67\nRanges without quorum:   67\nDiscarded live replicas: 0\n\nProposed changes:\n  range r1:/Min updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r2:/System/NodeLiveness updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r3:/System/NodeLivenessMax updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r4:/System/tsd updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r5:/System/\"tse\" updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r6:/Table/0 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r7:/Table/3 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r8:/Table/4 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r9:/Table/5 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r10:/Table/6 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r11:/Table/7 updating replica (n1,s1):1 to (n1,s1):14. 
Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r12:/Table/8 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r13:/Table/9 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r14:/Table/11 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r15:/Table/12 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r16:/Table/13 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r17:/Table/14 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r18:/Table/15 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r19:/Table/16 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r20:/Table/17 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r21:/Table/18 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r22:/Table/19 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r23:/Table/20 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r24:/Table/21 updating replica (n1,s1):1 to (n1,s1):14. 
Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r25:/Table/22 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r26:/Table/23 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r27:/Table/24 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r28:/Table/25 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r29:/Table/26 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r30:/Table/27 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r31:/Table/28 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r32:/Table/29 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r33:/NamespaceTable/30 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r34:/NamespaceTable/Max updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r35:/Table/32 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r36:/Table/33 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r37:/Table/34 updating replica (n1,s1):1 to (n1,s1):14. 
Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r38:/Table/35 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r39:/Table/36 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r40:/Table/37 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r41:/Table/38 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r42:/Table/39 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r43:/Table/40 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r44:/Table/41 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r45:/Table/42 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r46:/Table/43 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r47:/Table/44 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r48:/Table/45 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r49:/Table/46 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r50:/Table/47 updating replica (n1,s1):1 to (n1,s1):14. 
Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r51:/Table/48 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r52:/Table/50 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r53:/Table/51 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r54:/Table/52 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r55:/Table/53 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r56:/Table/54 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r57:/Table/55 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r58:/Table/56 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r59:/Table/57 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r60:/Table/58 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r61:/Table/59 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r62:/Table/60 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r63:/Table/61 updating replica (n1,s1):1 to (n1,s1):14. 
Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r64:/Table/62 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n  range r65:/Table/63 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r66:/Table/64 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].\n  range r67:/Table/65 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].\n\nDiscovered dead nodes, will be marked as decommissioned:\n  n2, n3\n\n\nFound replica inconsistencies:\n\nrange has unapplied split operation\n  r67, /{Table/65-Max} rhs r68, /{Table/Max-Max}\n\nOnly proceed as a last resort!\nERROR: can not create plan because of errors and no --force flag is given\n" does not contain "- node n1"
            Test:           TestLossOfQuorumRecovery
            Messages:       planner didn't provide correct apply instructions
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestLossOfQuorumRecovery1785584054
--- FAIL: TestLossOfQuorumRecovery (64.12s)

Parameters:

See also: How To Investigate a Go Test Failure (internal)

/cc @cockroachdb/kv @cockroachdb/server

This test on roachdash | Improve this report!

Jira issue: CRDB-37327

arulajmani commented 6 months ago

I'll be cautious and mark this as a GA-blocker to understand whether the LOQ tool should work with splits or not. Otherwise, I think we just need to disable the split queue for this test to prevent flakes.

andrewbaptist commented 6 months ago

I think this test is invalid. We specifically don't guarantee the LOQ tooling will work if we lose system ranges (particularly meta2). I'm surprised this hasn't flaked more often.

There are two options:

1) Attempt to stop any operations that modify range descriptors. This may help, but there is still a risk that something else will cause this to flake.
2) Don't cause a LOQ for any system ranges. This could be done by creating 5 nodes, pinning the scratch range to 3 of them, and then taking down 2 of those nodes. This leaves the system ranges available and tests the case where we expect recovery to always work.
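The quorum arithmetic behind option 2 can be illustrated with a toy helper (a hypothetical sketch; the replica placements below are assumed for illustration and are not taken from the test):

```go
package main

import "fmt"

// hasQuorum reports whether a range with the given replica placement
// still has a raft majority after the listed nodes go down.
func hasQuorum(replicaNodes []int, deadNodes map[int]bool) bool {
	live := 0
	for _, n := range replicaNodes {
		if !deadNodes[n] {
			live++
		}
	}
	return live > len(replicaNodes)/2
}

func main() {
	dead := map[int]bool{2: true, 3: true} // take down n2 and n3

	scratch := []int{1, 2, 3} // scratch range pinned to n1-n3
	system := []int{1, 4, 5}  // a system range placed on surviving nodes

	fmt.Println(hasQuorum(scratch, dead)) // false: needs LOQ recovery
	fmt.Println(hasQuorum(system, dead))  // true: still available
}
```

With placements like these, only the pinned scratch range loses quorum, so the recovery tooling is exercised without touching system ranges.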

Thoughts?

pav-kv commented 6 months ago

The generated plan has the following at the end:

...
  range r66:/Table/64 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].
  range r67:/Table/65 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n3,s3):3,(n2,s2):2].

Discovered dead nodes, will be marked as decommissioned:
  n2, n3

Found replica inconsistencies:

range has unapplied split operation
  r67, /{Table/65-Max} rhs r68, /{Table/Max-Max}

Only proceed as a last resort!
ERROR: can not create plan because of errors and no --force flag is given

In contrast, a successful test run generates this:

...
  range r67:/Table/65 updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].
  range r68:/Table/Max updating replica (n1,s1):1 to (n1,s1):14. Discarding available replicas: [], discarding dead replicas: [(n2,s2):3,(n3,s3):2].

Discovered dead nodes, will be marked as decommissioned:
  n2, n3

Plan created.
To stage recovery application in half-online mode invoke:

cockroach debug recover apply-plan  --host <node-hostname>[:<port>] [--certs-dir <certificates-dir>|--insecure] recovery-plan.json

Alternatively distribute plan to below nodes and invoke 'debug recover apply-plan --store=<store-dir> recovery-plan.json' on:
- node n1, store(s) s1

The LOQ tooling documentation says that we can't recover after "losing replica state due to a range merge or range split that happened at almost the exact moment that a node failed".

pav-kv commented 6 months ago

Looking at the list of ranges: in a successful run we have 68, with r68 being /Table/Max (the scratch range's start key; we create this scratch range at the beginning of the test). In the failed run we only have 67 ranges. The last split likely just failed to complete in time.

We can probably wait for the split to complete on all 3 replicas before shutting down the cluster.

pav-kv commented 6 months ago

Will close this with the above PR, which addresses the failure message here.

In the future, if this test fails again we may need to disable queues and maybe more:

I think you need ReplicationManual to disable the split queue. We can consider switching to it if this test fails again. Separately, we could maybe also use WaitForZoneConfigPropagation and WaitForFullReplication before shutting off the nodes.