hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

Data erased by moving a raft region leader into another region #14429

Closed zizon closed 2 years ago

zizon commented 2 years ago

Nomad version

Output from nomad version: Nomad v1.2.6

Operating system and Environment details

Issue

Moving a raft leader from region A to another region B, by changing the region field in its config file, causes region B's raft state to be overwritten by A's, and thus all running state in region B is lost.

We encountered this severe disaster recently. In brief:

  1. 10.129.100.233 was in a region named not-assign and was meant to be moved into another region named non-live-stable (the raft leader of that cluster at the time was 10.129.110.18).
  2. We did not clean up or remove the server files of 10.129.100.233, meaning we kept /var/lib/nomad/server as it was.
  3. After changing region in the config from not-assign to non-live-stable and restarting 10.129.100.233, we noticed that the raft log of 10.129.110.18 was forcibly overwritten.

The associated logs for each step are attached at the bottom.

The root cause of this disaster involves an operation outside standard practice, but perhaps the raft restore procedure should be revised:

  1. When 10.129.100.233 restarted, it first restored the old, now-incorrect raft state from its persisted state.
  2. 10.129.100.233 then held the raft state of not-assign while being configured as a non-live-stable server.
  3. Via serf, 10.129.100.233 and 10.129.110.18 discovered each other, as well as all other federated servers.
  4. 10.129.110.18, as the then-leader of non-live-stable, tried to append raft logs to 10.129.100.233, but was rejected because 10.129.100.233, carrying state from not-assign, happened to have a higher term.
  5. Meanwhile, 10.129.100.233 was still running as the leader of not-assign after restart and re-election.
  6. As the members of non-live-stable noticed the original leader (10.129.110.18) stepping down, and 10.129.100.233 had added all of those members to its not-assign configuration as non-voters via serf discovery, a log overwrite occurred (see the sketch after this list).
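
To make steps 4 and 6 concrete, here is a deliberately simplified Go sketch of the two generic Raft rules at play (stale-term rejection, and step-down plus log replacement on seeing a higher term). It is not the hashicorp/raft implementation; the node names, terms, and indexes are taken from the logs below purely for illustration.

```go
// Toy sketch of the generic Raft term rules involved in this incident.
// This is NOT the hashicorp/raft code; it only illustrates the behaviour.
package main

import "fmt"

type Entry struct {
	Term  uint64
	Index uint64
}

type Node struct {
	Name        string
	CurrentTerm uint64
	IsLeader    bool
	Log         []Entry
}

// handleAppendEntries mimics (very roughly) a node receiving AppendEntries.
// Real Raft checks prevLogIndex/prevLogTerm and truncates only conflicting
// entries; here the whole log is replaced to keep the sketch short.
func (n *Node) handleAppendEntries(senderTerm uint64, senderLog []Entry) bool {
	// Rule 1: reject a sender with a stale term. This is what 10.129.100.233
	// (carrying a higher term from its old region) did to 10.129.110.18.
	if senderTerm < n.CurrentTerm {
		return false
	}
	// Rule 2: on an equal-or-higher term, adopt it, step down if currently
	// leading, and accept the sender's entries, which replace any conflicting
	// local ones. This is the "forced overwrite" seen in the incident.
	n.CurrentTerm = senderTerm
	n.IsLeader = false
	n.Log = append([]Entry(nil), senderLog...)
	return true
}

func main() {
	staleNode := &Node{Name: "10.129.100.233", CurrentTerm: 9740, IsLeader: true,
		Log: []Entry{{Term: 9740, Index: 1293460}}}
	realLeader := &Node{Name: "10.129.110.18", CurrentTerm: 344, IsLeader: true,
		Log: []Entry{{Term: 344, Index: 2344000}}}

	// Step 4: the non-live-stable leader replicates to the rejoined node and is
	// rejected because that node carries a higher term from not-assign.
	fmt.Println("replication accepted by stale node:",
		staleNode.handleAppendEntries(realLeader.CurrentTerm, realLeader.Log))

	// Step 6: the stale node, still acting as a leader, replicates to the real
	// leader, whose lower term forces it to step down and take the stale log.
	fmt.Println("replication accepted by real leader:",
		realLeader.handleAppendEntries(staleNode.CurrentTerm, staleNode.Log))
	fmt.Println("real leader is still leader:", realLeader.IsLeader)
}
```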

Reproduction steps

Expected Result

Actual Result

Job file (if appropriate)

Nomad Server logs (if appropriate)

# before restart
10.129.100.233:

2022-08-30T16:21:40.834+0800 [INFO]  nomad: shutting down server
2022-08-30T16:21:40.834+0800 [WARN]  nomad: serf: Shutdown without a Leave
2022-08-30T16:21:40.834+0800 [INFO]  nomad.raft: aborting pipeline replication: peer="{Voter df0e75eb-6503-cbd3-1889-6ba00b25781a 10.129.104.207:4647}"
2022-08-30T16:21:40.834+0800 [INFO]  nomad.raft: aborting pipeline replication: peer="{Voter 108cc51c-18a7-8fbb-d449-7b25cebfcd45 10.129.104.205:4647}"
2022-08-30T16:21:40.834+0800 [INFO]  nomad: cluster leadership lost
2022-08-30T16:21:40.835+0800 [INFO]  agent: shutdown complete

10.129.110.18:
2022-08-30T16:21:40.853+0800 [ERROR] nomad: failed to fetch policies from authoritative region: error="rpc error: EOF"
# after restart
10.129.100.233:

# startup: initialize plugins and kick off raft
2022-08-30T16:21:40.935+0800 [WARN]  agent.plugin_loader: skipping external plugins since plugin_dir doesn't exist: plugin_dir=/var/lib/nomad/plugins
2022-08-30T16:21:40.939+0800 [INFO]  agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
2022-08-30T16:21:40.939+0800 [INFO]  agent: detected plugin: name=java type=driver plugin_version=0.1.0
2022-08-30T16:21:40.939+0800 [INFO]  agent: detected plugin: name=docker type=driver plugin_version=0.1.0
2022-08-30T16:21:40.939+0800 [INFO]  agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
2022-08-30T16:21:40.939+0800 [INFO]  agent: detected plugin: name=exec type=driver plugin_version=0.1.0
2022-08-30T16:21:40.953+0800 [INFO]  nomad.raft: restored from snapshot: id=9739-1285904-1661462873769

# with old raft config
2022-08-30T16:21:40.993+0800 [INFO]  nomad.raft: initial configuration: index=1198325 servers="[{Suffrage:Voter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647} {Suffrage:Voter ID:108cc51c-18a7-8fbb-d449-7b25cebfcd45 Address:10.129.104.205:4647} {Suffrage:Voter ID:df0e75eb-6503-cbd3-1889-6ba00b25781a Address:10.129.104.207:4647}]"
2022-08-30T16:21:40.993+0800 [INFO]  nomad.raft: entering follower state: follower="Node at 10.129.100.233:4647 [Follower]" leader=

# ordinary serf discovery
2022-08-30T16:21:40.994+0800 [INFO]  nomad: serf: Attempting re-join to previously known node: sg2-shopee-sz_devops-worker-test-10-129-104-205.not-assign: 10.129.104.205:4648
2022-08-30T16:21:40.994+0800 [INFO]  nomad: adding server: server="supply-chain-test-node9.non-live-stable (Addr: 10.129.100.233:4647) (DC: legacy)"
2022-08-30T16:21:40.995+0800 [INFO]  client: using state directory: state_dir=/var/lib/nomad/client
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-resource-budget-test-10-129-110-18.non-live-stable 10.129.110.18
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-cache-codis-test-10-129-105-10.non-live 10.129.105.10
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="sg2-shopee-resource-budget-test-10-129-110-18.non-live-stable (Addr: 10.129.110.18:4647) (DC: legacy)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="sg2-shopee-cache-codis-test-10-129-105-10.non-live (Addr: 10.129.105.10:4647) (DC: not-assign)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-sz_devops-worker-test-10-129-104-205.not-assign 10.129.104.205
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="sg2-shopee-sz_devops-worker-test-10-129-104-205.not-assign (Addr: 10.129.104.205:4647) (DC: not-assign)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: supply-chain-test-node3.non-live-stable 10.129.97.143
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-sz_devops-worker-test-10-129-104-207.not-assign 10.129.104.207
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="supply-chain-test-node3.non-live-stable (Addr: 10.129.97.143:4647) (DC: legacy)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="sg2-shopee-sz_devops-worker-test-10-129-104-207.not-assign (Addr: 10.129.104.207:4647) (DC: not-assign)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-resource-budget-test-10-129-110-13.non-live-stable 10.129.110.13
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: supply-chain-test-node9.not-assign 10.129.100.233
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="sg2-shopee-resource-budget-test-10-129-110-13.non-live-stable (Addr: 10.129.110.13:4647) (DC: legacy)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: adding server: server="supply-chain-test-node9.not-assign (Addr: 10.129.100.233:4647) (DC: not-assign)"
2022-08-30T16:21:40.996+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-resource-budget-test-10-129-109-231.non-live 10.129.109.231
2022-08-30T16:21:40.997+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-resource-budget-test-10-129-110-16.non-live-stable 10.129.110.16
2022-08-30T16:21:40.997+0800 [INFO]  nomad: serf: EventMemberJoin: sg2-shopee-resource-budget-test-10-129-109-220.non-live 10.129.109.220
2022-08-30T16:21:40.997+0800 [INFO]  nomad: adding server: server="sg2-shopee-resource-budget-test-10-129-109-231.non-live (Addr: 10.129.109.231:4647) (DC: not-assign)"
2022-08-30T16:21:40.997+0800 [INFO]  nomad: adding server: server="sg2-shopee-resource-budget-test-10-129-110-16.non-live-stable (Addr: 10.129.110.16:4647) (DC: legacy)"
2022-08-30T16:21:40.997+0800 [INFO]  nomad: serf: Re-joined to previously known node: sg2-shopee-sz_devops-worker-test-10-129-104-205.not-assign: 10.129.104.205:4648
2022-08-30T16:21:40.997+0800 [INFO]  nomad: adding server: server="sg2-shopee-resource-budget-test-10-129-109-220.non-live (Addr: 10.129.109.220:4647) (DC: not-assign)"

# elected as leader with the old raft config; note the tally of 2 and the replication peers, indicating it was elected as the not-assign leader
2022-08-30T16:21:42.929+0800 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader=
2022-08-30T16:21:42.929+0800 [INFO]  nomad.raft: entering candidate state: node="Node at 10.129.100.233:4647 [Candidate]" term=9740
2022-08-30T16:21:42.972+0800 [INFO]  nomad.raft: election won: tally=2
2022-08-30T16:21:42.972+0800 [INFO]  nomad.raft: entering leader state: leader="Node at 10.129.100.233:4647 [Leader]"
2022-08-30T16:21:42.972+0800 [INFO]  nomad.raft: added peer, starting replication: peer=108cc51c-18a7-8fbb-d449-7b25cebfcd45
2022-08-30T16:21:42.972+0800 [INFO]  nomad.raft: added peer, starting replication: peer=df0e75eb-6503-cbd3-1889-6ba00b25781a
2022-08-30T16:21:42.972+0800 [INFO]  nomad: cluster leadership acquired

10.129.110.18:

# discovered the joining 10.129.100.233 with its new serf config (note the .non-live-stable serf suffix)
2022-08-30T16:21:41.023+0800 [INFO]  nomad: serf: EventMemberJoin: supply-chain-test-node9.non-live-stable 10.129.100.233
2022-08-30T16:21:41.023+0800 [INFO]  nomad: adding server: server="supply-chain-test-node9.non-live-stable (Addr: 10.129.100.233:4647) (DC: legacy)"

# then added it to the non-live-stable raft group via the discovered serf hint
2022-08-30T16:21:41.024+0800 [INFO]  nomad.raft: updating configuration: command=AddNonvoter server-id=1381271f-4333-b5ad-a03a-262e0a4e7b7b server-addr=10.129.100.233:4647 servers="[{Suffrage:Voter ID:4ef0b9ff-b2bc-f42e-de2d-251721f64b54 Address:10.129.110.18:4647} {Suffrage:Voter ID:72af9e77-228c-f9b7-73b2-9506e0cfb004 Address:10.129.110.16:4647} {Suffrage:Voter ID:055f8de6-e53f-8ec0-bcbe-b4779c7c06cc Address:10.129.97.143:4647} {Suffrage:Voter ID:ee2ec969-f1a9-0228-94b4-682292efd456 Address:10.129.110.13:4647} {Suffrage:Nonvoter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647}]"

# since it was the leader of non-live-stable at that time, it tried to replicate its log
2022-08-30T16:21:41.051+0800 [INFO]  nomad.raft: added peer, starting replication: peer=1381271f-4333-b5ad-a03a-262e0a4e7b7b

# but was rejected, as the log indicates
2022-08-30T16:21:41.079+0800 [ERROR] nomad.raft: peer has newer term, stopping replication: peer="{Nonvoter 1381271f-4333-b5ad-a03a-262e0a4e7b7b 10.129.100.233:4647}"

# then it stepped down from leadership, triggering a new election
2022-08-30T16:21:41.079+0800 [INFO]  nomad.raft: entering follower state: follower="Node at 10.129.110.18:4647 [Follower]" leader=
2022-08-30T16:21:41.079+0800 [INFO]  nomad.raft: aborting pipeline replication: peer="{Voter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}"
2022-08-30T16:21:41.079+0800 [INFO]  nomad.raft: aborting pipeline replication: peer="{Voter 055f8de6-e53f-8ec0-bcbe-b4779c7c06cc 10.129.97.143:4647}"
2022-08-30T16:21:41.079+0800 [INFO]  nomad.raft: aborting pipeline replication: peer="{Voter ee2ec969-f1a9-0228-94b4-682292efd456 10.129.110.13:4647}"
2022-08-30T16:21:41.079+0800 [INFO]  nomad: cluster leadership lost
2022-08-30T16:21:41.960+0800 [INFO]  nomad: successfully contacted Nomad servers: num_servers=5

# then, during the re-election churn
2022-08-30T16:21:42.484+0800 [INFO]  nomad.raft: duplicate requestVote for same term: term=344
2022-08-30T16:21:42.767+0800 [INFO]  nomad.raft: duplicate requestVote for same term: term=344
# overwrite

10.129.100.233:

# notified via serf of the voters of non-live-stable, adding them as non-voters
2022-08-30T16:21:43.021+0800 [INFO]  nomad.raft: updating configuration: command=AddNonvoter server-id=72af9e77-228c-f9b7-73b2-9506e0cfb004 server-addr=10.129.110.16:4647 servers="[{Suffrage:Voter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647} {Suffrage:Voter ID:108cc51c-18a7-8fbb-d449-7b25cebfcd45 Address:10.129.104.205:4647} {Suffrage:Voter ID:df0e75eb-6503-cbd3-1889-6ba00b25781a Address:10.129.104.207:4647} {Suffrage:Nonvoter ID:72af9e77-228c-f9b7-73b2-9506e0cfb004 Address:10.129.110.16:4647}]"
2022-08-30T16:21:43.022+0800 [INFO]  nomad.raft: added peer, starting replication: peer=72af9e77-228c-f9b7-73b2-9506e0cfb004
2022-08-30T16:21:43.022+0800 [INFO]  nomad.raft: updating configuration: command=AddNonvoter server-id=4ef0b9ff-b2bc-f42e-de2d-251721f64b54 server-addr=10.129.110.18:4647 servers="[{Suffrage:Voter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647} {Suffrage:Voter ID:108cc51c-18a7-8fbb-d449-7b25cebfcd45 Address:10.129.104.205:4647} {Suffrage:Voter ID:df0e75eb-6503-cbd3-1889-6ba00b25781a Address:10.129.104.207:4647} {Suffrage:Nonvoter ID:72af9e77-228c-f9b7-73b2-9506e0cfb004 Address:10.129.110.16:4647} {Suffrage:Nonvoter ID:4ef0b9ff-b2bc-f42e-de2d-251721f64b54 Address:10.129.110.18:4647}]"
2022-08-30T16:21:43.023+0800 [INFO]  nomad.raft: added peer, starting replication: peer=4ef0b9ff-b2bc-f42e-de2d-251721f64b54
2022-08-30T16:21:43.024+0800 [INFO]  nomad.raft: updating configuration: command=AddNonvoter server-id=ee2ec969-f1a9-0228-94b4-682292efd456 server-addr=10.129.110.13:4647 servers="[{Suffrage:Voter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647} {Suffrage:Voter ID:108cc51c-18a7-8fbb-d449-7b25cebfcd45 Address:10.129.104.205:4647} {Suffrage:Voter ID:df0e75eb-6503-cbd3-1889-6ba00b25781a Address:10.129.104.207:4647} {Suffrage:Nonvoter ID:72af9e77-228c-f9b7-73b2-9506e0cfb004 Address:10.129.110.16:4647} {Suffrage:Nonvoter ID:4ef0b9ff-b2bc-f42e-de2d-251721f64b54 Address:10.129.110.18:4647} {Suffrage:Nonvoter ID:ee2ec969-f1a9-0228-94b4-682292efd456 Address:10.129.110.13:4647}]"
2022-08-30T16:21:43.024+0800 [INFO]  nomad.raft: added peer, starting replication: peer=ee2ec969-f1a9-0228-94b4-682292efd456
2022-08-30T16:21:43.025+0800 [INFO]  nomad.raft: updating configuration: command=AddNonvoter server-id=055f8de6-e53f-8ec0-bcbe-b4779c7c06cc server-addr=10.129.97.143:4647 servers="[{Suffrage:Voter ID:1381271f-4333-b5ad-a03a-262e0a4e7b7b Address:10.129.100.233:4647} {Suffrage:Voter ID:108cc51c-18a7-8fbb-d449-7b25cebfcd45 Address:10.129.104.205:4647} {Suffrage:Voter ID:df0e75eb-6503-cbd3-1889-6ba00b25781a Address:10.129.104.207:4647} {Suffrage:Nonvoter ID:72af9e77-228c-f9b7-73b2-9506e0cfb004 Address:10.129.110.16:4647} {Suffrage:Nonvoter ID:4ef0b9ff-b2bc-f42e-de2d-251721f64b54 Address:10.129.110.18:4647} {Suffrage:Nonvoter ID:ee2ec969-f1a9-0228-94b4-682292efd456 Address:10.129.110.13:4647} {Suffrage:Nonvoter ID:055f8de6-e53f-8ec0-bcbe-b4779c7c06cc Address:10.129.97.143:4647}]"

# since the peers have "incorrect history", 10.129.100.233, as leader, forcibly overwrites each voter of non-live-stable (each is a non-voter in not-assign, since it has not caught up with the leader)
2022-08-30T16:21:43.025+0800 [INFO]  nomad.raft: added peer, starting replication: peer=055f8de6-e53f-8ec0-bcbe-b4779c7c06cc
2022-08-30T16:21:43.066+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293467
2022-08-30T16:21:43.067+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293466
2022-08-30T16:21:43.067+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293465
2022-08-30T16:21:43.067+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293464
2022-08-30T16:21:43.068+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293463
2022-08-30T16:21:43.068+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293462
2022-08-30T16:21:43.068+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293461
2022-08-30T16:21:43.069+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 72af9e77-228c-f9b7-73b2-9506e0cfb004 10.129.110.16:4647}" next=1293460
2022-08-30T16:21:43.069+0800 [WARN]  nomad.raft: appendEntries rejected, sending older logs: peer="{Nonvoter 4ef0b9ff-b2bc-f42e-de2d-251721f64b54 10.129.110.18:4647}" next=1293468
202

10.129.110.18:

# raft had discovered 10.129.100.233 as a non-live-stable raft member (num_servers=5: the 4 original raft members plus the newly discovered, malformed 10.129.100.233 carrying an incorrect leader state)
2022-08-30T16:21:41.960+0800 [INFO]  nomad: successfully contacted Nomad servers: num_servers=5
2022-08-30T16:21:42.484+0800 [INFO]  nomad.raft: duplicate requestVote for same term: term=344
2022-08-30T16:21:42.767+0800 [INFO]  nomad.raft: duplicate requestVote for same term: term=344

# then trying to catch up with the leader's history
2022-08-30T16:21:43.069+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293468 last-index=2344000 error="log not found"
2022-08-30T16:21:43.070+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293467 last-index=2344000 error="log not found"
2022-08-30T16:21:43.070+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293466 last-index=2344000 error="log not found"
2022-08-30T16:21:43.070+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293465 last-index=2344000 error="log not found"
2022-08-30T16:21:43.071+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293464 last-index=2344000 error="log not found"
2022-08-30T16:21:43.071+0800 [WARN]  nomad.raft: failed to get previous log: previous-index=1293463 last-index=2344000 error="log not found"

Nomad Client logs (if appropriate)

lgfa29 commented 2 years ago

Hi @zizon :wave:

I'm sorry to hear you went through an experience like this but, unfortunately, the Raft replication process worked as designed here. This scenario is similar to what would happen if you added a test database server to a production cluster with data replication enabled: your production data would be overwritten with test data.

In terms of how we can improve Nomad to avoid situations like this, preventing the agent's region from changing if there's any state stored on the node could be a good safety check. I opened https://github.com/hashicorp/nomad/issues/14444 to track this enhancement.
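
For what it's worth, here is a hypothetical sketch of what such a startup check could look like. The "region" marker file under the server data directory and every name here are made up for illustration and are not Nomad's actual implementation:

```go
// Hypothetical sketch of the proposed safety check: refuse to start the server
// if the configured region differs from the region recorded alongside existing
// state. All file names and function names here are illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// checkRegionUnchanged returns an error when the data dir already holds server
// state that was written under a different region.
func checkRegionUnchanged(dataDir, configuredRegion string) error {
	markerPath := filepath.Join(dataDir, "server", "region")

	prev, err := os.ReadFile(markerPath)
	if os.IsNotExist(err) {
		// First boot (or wiped state): record the region for future checks.
		if mkErr := os.MkdirAll(filepath.Dir(markerPath), 0o700); mkErr != nil {
			return mkErr
		}
		return os.WriteFile(markerPath, []byte(configuredRegion), 0o600)
	}
	if err != nil {
		return err
	}

	if prevRegion := strings.TrimSpace(string(prev)); prevRegion != configuredRegion {
		return fmt.Errorf("server state in %s was created for region %q but the agent is configured for region %q; wipe the server state or restore the original region before starting",
			dataDir, prevRegion, configuredRegion)
	}
	return nil
}

func main() {
	if err := checkRegionUnchanged("/var/lib/nomad", "non-live-stable"); err != nil {
		fmt.Fprintln(os.Stderr, "refusing to start:", err)
		os.Exit(1)
	}
	fmt.Println("region check passed")
}
```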

We can also improve our documentation to highlight that region (and potentially other) configuration fields should not be changed. Explaining how to use nomad operator snapshot save and nomad operator snapshot restore to back up data could be useful as well. https://github.com/hashicorp/nomad/pull/14443 adds this warning to region.
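
As a side note, here is a minimal sketch of scripting that backup step before a risky change such as moving a server between regions. It assumes the nomad binary is on PATH and that any required NOMAD_ADDR / NOMAD_TOKEN environment variables are already set:

```go
// Minimal sketch of wrapping the backup command mentioned above. It assumes the
// `nomad` binary is on PATH and the agent address/token are set via environment.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func runNomad(args ...string) error {
	cmd := exec.Command("nomad", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Take a cluster snapshot before any risky change.
	file := fmt.Sprintf("nomad-backup-%s.snap", time.Now().Format("20060102-150405"))
	if err := runNomad("operator", "snapshot", "save", file); err != nil {
		log.Fatalf("snapshot save failed: %v", err)
	}
	log.Printf("saved cluster snapshot to %s", file)

	// If the change goes wrong, the same file can be restored with:
	//   nomad operator snapshot restore <file>
}
```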

I linked back to this issue as you have provided great detail and information, but since this was not caused by a bug in Nomad I will go ahead and close it. Feel free to reach out if you have any other questions.

zizon commented 2 years ago

I agree that it is not a bug in raft, but there are bigger issues behind this design/usage flaw. I will share more details on #14444.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.