redpanda-data / redpanda


CI Failure (TimeoutError: KgoVerifierConsumerGroupConsumer didn't complete in 599 seconds) in `RandomNodeOperationsTest.test_node_operations` #21345

Closed: bashtanov closed this issue 2 months ago

bashtanov commented 3 months ago

https://buildkite.com/redpanda/redpanda/builds/51258

Module: rptest.tests.random_node_operations_test
Class:  RandomNodeOperationsTest
Method: test_node_operations
Arguments:
{
  "enable_failures": true,
  "num_to_upgrade": 0,
  "with_tiered_storage": false
}
test_id:    rptest.tests.random_node_operations_test.RandomNodeOperationsTest.test_node_operations.enable_failures=True.num_to_upgrade=0.with_tiered_storage=False
status:     FAIL
run time:   17 minutes 37.876 seconds

    TimeoutError("KgoVerifierConsumerGroupConsumer-4-140678831673152 didn't complete in 599.9999985694885 seconds")
Traceback (most recent call last):
  File "/opt/.ducktape-venv/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 184, in _do_run
    data = self.run_test()
  File "/opt/.ducktape-venv/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 276, in run_test
    return self.test_context.function(self.test)
  File "/opt/.ducktape-venv/lib/python3.10/site-packages/ducktape/mark/_mark.py", line 535, in wrapper
    return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/root/tests/rptest/services/cluster.py", line 105, in wrapped
    r = f(self, *args, **kwargs)
  File "/root/tests/rptest/tests/random_node_operations_test.py", line 484, in test_node_operations
    compacted_producer_consumer.verify()
  File "/root/tests/rptest/tests/random_node_operations_test.py", line 262, in verify
    self.consumer.wait()
  File "/opt/.ducktape-venv/lib/python3.10/site-packages/ducktape/services/service.py", line 287, in wait
    if not self.wait_node(node, end - now):
  File "/root/tests/rptest/services/kgo_verifier_services.py", line 251, in wait_node
    return self._do_wait_node(node, timeout_sec)
  File "/root/tests/rptest/services/kgo_verifier_services.py", line 287, in _do_wait_node
    self._redpanda.wait_until(
  File "/root/tests/rptest/services/redpanda.py", line 1054, in wait_until
    wait_until(wrapped,
  File "/opt/.ducktape-venv/lib/python3.10/site-packages/ducktape/utils/util.py", line 57, in wait_until
    raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from last_exception
ducktape.errors.TimeoutError: KgoVerifierConsumerGroupConsumer-4-140678831673152 didn't complete in 599.9999985694885 seconds

JIRA Link: CORE-5576

bashtanov commented 3 months ago

My findings so far: KgoVerifier requests offsets from the nodes, and one of the nodes consistently replies that it is not the leader for one of the partitions. The reason is that it was the last leader, and raft then fails to re-elect a leader for 10 minutes. The old leader is neither a leader nor does it attempt to become one, yet it still has the most up-to-date log, so the other nodes fail to become leaders too.
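
For reference, the reason the other nodes cannot win is Raft's up-to-date rule: a voter grants its vote only when the candidate's log is at least as current as its own (the log_ok field visible in the vote replies in the log excerpt below). A minimal sketch of that check, with illustrative names rather than the actual consensus.cc code:

// Simplified model of Raft's "log up-to-date" vote check (Raft paper,
// section 5.4.1); names are illustrative, not Redpanda's actual API.
#include <cstdint>

struct log_position {
    int64_t last_term;  // term of the last log entry
    int64_t last_index; // index of the last log entry
};

// A candidate's log is "ok" for a voter if its last entry has a higher
// term, or the same term and at least as high an index.
bool log_ok(const log_position& candidate, const log_position& voter) {
    if (candidate.last_term != voter.last_term) {
        return candidate.last_term > voter.last_term;
    }
    return candidate.last_index >= voter.last_index;
}

// Here the ex-leader holds the longest log, so every other candidate fails
// log_ok against it, while the ex-leader itself never campaigns: the group
// stays leaderless.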

Subsequent reasoning I'm less confident in: the ex-leader (docker-rp-1, node 0) lost its leadership under weird circumstances:

1) A non-prevote vote_stm got stuck waiting for all replies for 9 seconds, which I'm not sure how could happen with the default vote timeout of 1500 ms.
2) When it got unstuck, another voting round was in flight, a prevote this time (a 1 ms wide "coincidence"! also, why on earth are there two concurrent voting rounds for the same candidate?).
3) The first vote_stm made the node the leader, bumping _hbeat to max.
4) The second one (the pre-vote) lost the election, so it made the node a follower without resetting _hbeat to min or to now.

As a result, we have a non-leader node with _hbeat at max.
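
A rough sketch of that suspected interleaving, assuming _hbeat is the last-heard-from-leader timestamp the election timer compares against (a simplified model with paraphrased names, not the actual vote_stm.cc/consensus.cc logic):

// Simplified model of the suspected race; names and structure are
// paraphrased, not Redpanda's actual vote_stm.cc/consensus.cc code.
#include <chrono>

using clock_type = std::chrono::steady_clock;

struct consensus_state {
    enum class role { follower, candidate, leader } r = role::follower;
    // Last time a heartbeat was seen from a leader.
    clock_type::time_point hbeat = clock_type::now();
};

void on_election_won(consensus_state& st) {
    // Step 3 above: the non-prevote round wins and pins _hbeat to max,
    // so the leader's own election timer never fires.
    st.r = consensus_state::role::leader;
    st.hbeat = clock_type::time_point::max();
}

void on_prevote_lost(consensus_state& st) {
    // Step 4 above: the losing prevote round steps the node down, but
    // (per the analysis) without resetting hbeat.
    st.r = consensus_state::role::follower;
    // missing: st.hbeat = clock_type::now();
}

bool election_timeout_elapsed(const consensus_state& st,
                              std::chrono::milliseconds timeout) {
    // With hbeat pinned at time_point::max(), this check never fires,
    // so the node never starts another election.
    return st.r != consensus_state::role::leader
           && clock_type::now() - st.hbeat > timeout;
}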

INFO  2024-07-09 12:09:23,513 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:928 - starting pre-vote leader election, current term: 1, leadership transfer: true
DEBUG 2024-07-09 12:09:23,513 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:3127 - triggering leadership notification with term: 1, new leader: {nullopt}
DEBUG 2024-07-09 12:09:23,513 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1010 - pre-vote phase success: true, current term: 1, leadership transfer: true
TRACE 2024-07-09 12:09:23,513 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:467 - [pre-vote: false] voting for self in term 2
DEBUG 2024-07-09 12:09:23,513 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:3221 - Timeout request election triggered from node {id: 1, revision: 41} at term 1
TRACE 2024-07-09 12:09:23,513 [shard 0:raft] cluster - metadata_dissemination_service.cc:185 - updating {kafka/tp-workload-compaction/14} leadership locally
TRACE 2024-07-09 12:09:23,513 [shard 0:raft] cluster - partition_leaders_table.cc:253 - updated partition: kafka/tp-workload-compaction/14 leader: {term: 1, current leader: {nullopt}, previous leader: {1}, revision: 1338}
TRACE 2024-07-09 12:09:23,513 [shard 1:raft] cluster - partition_leaders_table.cc:253 - updated partition: kafka/tp-workload-compaction/14 leader: {term: 1, current leader: {nullopt}, previous leader: {1}, revision: 1338}
INFO  2024-07-09 12:09:23,521 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:57 - [pre-vote: false] sending vote request to {id: 5, revision: 1073} with timeout of 4500 ms
INFO  2024-07-09 12:09:23,521 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:57 - [pre-vote: false] sending vote request to {id: 6, revision: 1338} with timeout of 4500 ms
DEBUG 2024-07-09 12:09:23,522 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:314 - [pre-vote false] trying to wait for vote replies from all of the nodes
DEBUG 2024-07-09 12:09:23,522 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:314 - [pre-vote false] trying to wait for vote replies from all of the nodes
INFO  2024-07-09 12:09:23,532 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:97 - [pre-vote: false] vote reply from {id: 5, revision: 1073} - {term: 2, target_node: {id: 0, revision: 41}, vote_granted: true, log_ok: true}
DEBUG 2024-07-09 12:09:23,532 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:314 - [pre-vote false] trying to wait for vote replies from all of the nodes
TRACE 2024-07-09 12:09:23,545 [shard 1:lc  ] storage-gc - disk_log_impl.cc:1174 - [{kafka/tp-workload-compaction/14}] house keeping with configuration from manager: {compact:{max_collectible_offset:12804, should_sanitize:{nullopt}}, gc:{eviction_time:{timestamp: 1719922163543}, max_bytes:18446744073709551615}}
DEBUG 2024-07-09 12:09:23,545 [shard 1:lc  ] storage-gc - disk_log_impl.cc:560 - [{kafka/tp-workload-compaction/14}] running sliding window compaction
DEBUG 2024-07-09 12:09:23,545 [shard 1:lc  ] storage-gc - disk_log_impl.cc:563 - [{kafka/tp-workload-compaction/14}] no more segments to compact
DEBUG 2024-07-09 12:09:23,545 [shard 1:lc  ] storage-gc - disk_log_impl.cc:1255 - Adjacent segments of {kafka/tp-workload-compaction/14}, no adjacent pair
INFO  2024-07-09 12:09:31,674 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:928 - starting pre-vote leader election, current term: 2, leadership transfer: false
TRACE 2024-07-09 12:09:31,674 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:467 - [pre-vote: true] voting for self in term 2
INFO  2024-07-09 12:09:31,674 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:57 - [pre-vote: true] sending vote request to {id: 6, revision: 1338} with timeout of 4500 ms
INFO  2024-07-09 12:09:31,674 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:57 - [pre-vote: true] sending vote request to {id: 5, revision: 1073} with timeout of 4500 ms
DEBUG 2024-07-09 12:09:31,674 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:314 - [pre-vote true] trying to wait for vote replies from all of the nodes
DEBUG 2024-07-09 12:09:31,674 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:338 - [pre-vote false] timed out waiting for all the replies
TRACE 2024-07-09 12:09:31,674 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:408 - vote acks in term 2 from: {{id: 0, revision: 41}, {id: 5, revision: 1073}}
INFO  2024-07-09 12:09:31,674 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:418 - becoming the leader term:2
DEBUG 2024-07-09 12:09:31,674 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:2593 - Replicating group configuration {current: {voters: {{id: 0, revision: 41}, {id: 5, revision: 1073}, {id: 6, revision: 1338}}, learners: {}}, old:{nullopt}, revision: 1338, update: {nullopt}, version: 6}}
TRACE 2024-07-09 12:09:31,674 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] replicate_entries_stm.cc:159 - Self append entries - {group: 174, commit_index: 12804, term: 2, prev_log_index: 12805, prev_log_term: 1, last_visible_index: 12804, dirty_offset: 12805}
DEBUG 2024-07-09 12:09:31,674 [shard 1:raft] storage - readers_cache.cc:237 - {kafka/tp-workload-compaction/14} - evicting reader from cache, range [9347,12805]
TRACE 2024-07-09 12:09:31,674 [shard 1:raft] storage - readers_cache.cc:305 - {kafka/tp-workload-compaction/14} - removing reader: [9347,12805] lower_bound: 12805
INFO  2024-07-09 12:09:31,674 [shard 1:raft] storage - segment.cc:806 - Creating new segment /var/lib/redpanda/data/kafka/tp-workload-compaction/14_41/12806-2-v1.log
TRACE 2024-07-09 12:09:31,674 [shard 1:raft] storage - segment_reader.cc:95 - ::get segment file /var/lib/redpanda/data/kafka/tp-workload-compaction/14_41/12806-2-v1.log, refcount=0
DEBUG 2024-07-09 12:09:31,674 [shard 1:raft] storage - segment_reader.cc:99 - Opening segment file /var/lib/redpanda/data/kafka/tp-workload-compaction/14_41/12806-2-v1.log
INFO  2024-07-09 12:09:31,675 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:106 - [pre-vote: false] vote error from {id: 6, revision: 1338} - rpc::errc::client_request_timeout
DEBUG 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:991 - [{kafka/tp-workload-compaction/14}] reconcilation step, reconciliation state: {pending_notifies: 1,  properties_changed_at: {nullopt}, removed_at: {nullopt}, cur_operation: {{revision: 1338, type: update, assignment: { id: 14, group_id: 174, replicas: {{node_id: 5, shard: 0}, {node_id: 0, shard: 0}, {node_id: 6, shard: 0}} }, retries: 2, last_error: cluster::errc:19 (Waiting for partition to recover)}}}
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:1005 - [{kafka/tp-workload-compaction/14}] existing partition log_revision: 41
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:1049 - [{kafka/tp-workload-compaction/14}] replicas view: {{orig_replicas: {{node_id: 1, shard: 0}, {node_id: 0, shard: 0}, {node_id: 5, shard: 0}}, last_update_finished_revision: 1118, update: {state: in_progress, update_rev: 1338, last_cmd_rev: 1338, previous: {{node_id: 1, shard: 0}, {node_id: 0, shard: 0}, {node_id: 5, shard: 0}}, target: {{node_id: 5, shard: 0}, {node_id: 0, shard: 0}, {node_id: 6, shard: 0}}, policy: target_initial_retention}}}
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:1056 - [{kafka/tp-workload-compaction/14}] placement state on this shard: {current: {{group: 174, log_revision: 41, status: hosted, shard_revision: 104}}, assigned: {{group: 174, log_revision: 41, shard_revision: 104}}, is_initial_for: {nullopt}, next: {nullopt}}
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:175 - [{kafka/tp-workload-compaction/14}] checking if configuration {current: {voters: {{id: 0, revision: 41}, {id: 5, revision: 1073}, {id: 6, revision: 1338}}, learners: {}}, old:{nullopt}, revision: 1338, update: {nullopt}, version: 6}} is up to date with [{node_id: 5, shard: 0}, {node_id: 0, shard: 0}, {node_id: 6, shard: 0}], revision: 1338
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:235 - [{kafka/tp-workload-compaction/14}] current node is a leader, waiting for configuration to be committed
TRACE 2024-07-09 12:09:31,675 [shard 1:main] cluster - controller_backend.cc:916 - [{kafka/tp-workload-compaction/14}] reconciliation attempt finished, state: {pending_notifies: 1,  properties_changed_at: {nullopt}, removed_at: {nullopt}, cur_operation: {{revision: 1338, type: update, assignment: { id: 14, group_id: 174, replicas: {{node_id: 5, shard: 0}, {node_id: 0, shard: 0}, {node_id: 6, shard: 0}} }, retries: 3, last_error: cluster::errc:19 (Waiting for partition to recover)}}}
TRACE 2024-07-09 12:09:31,676 [shard 1:raft] storage - segment_reader.cc:122 - ::put segment file /var/lib/redpanda/data/kafka/tp-workload-compaction/14_41/12806-2-v1.log, refcount=1
DEBUG 2024-07-09 12:09:31,676 [shard 1:raft] storage - segment_reader.cc:126 - Closing segment file /var/lib/redpanda/data/kafka/tp-workload-compaction/14_41/12806-2-v1.log
TRACE 2024-07-09 12:09:31,678 [shard 1:raft] tx - [{kafka/tp-workload-compaction/14}] - rm_stm.cc:1663 - taking snapshot with last included offset of: 12804
DEBUG 2024-07-09 12:09:31,678 [shard 1:raft] tx - [{kafka/tp-workload-compaction/14}] - rm_stm.cc:1686 - Removing abort indexes 0 with offset < 0
TRACE 2024-07-09 12:09:31,678 [shard 1:raft] tx - [{kafka/tp-workload-compaction/14}] - rm_stm.cc:1733 - Serializing snapshot { version: 6, producers: 1, aborted transactions: 0, abort indexes: 0 }
TRACE 2024-07-09 12:09:31,678 [shard 1:raft] cluster - [{kafka/tp-workload-compaction/14} (log_eviction_stm.snapshot)] - log_eviction_stm.cc:421 - Taking snapshot at offset: 12804
DEBUG 2024-07-09 12:09:31,678 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:314 - [pre-vote true] trying to wait for vote replies from all of the nodes
TRACE 2024-07-09 12:09:31,679 [shard 1:raft] offset_translator - ntp: {kafka/tp-workload-compaction/14} - offset_translator.cc:76 - adding batch, offsets: [12806,12806], delta: 17
TRACE 2024-07-09 12:09:31,679 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] configuration_manager.cc:161 - Adding configuration at offset 12806 index 17: {current: {voters: {{id: 0, revision: 41}, {id: 5, revision: 1073}, {id: 6, revision: 1338}}, learners: {}}, old:{nullopt}, revision: 1338, update: {nullopt}, version: 6}}
TRACE 2024-07-09 12:09:31,684 [shard 1:main] cluster - partition.cc:715 - local non-log disk size of {kafka/tp-workload-compaction/14} stm rm_stm = 206 bytes
TRACE 2024-07-09 12:09:31,684 [shard 1:main] cluster - partition.cc:715 - local non-log disk size of {kafka/tp-workload-compaction/14} stm log_eviction_stm = 0 bytes
TRACE 2024-07-09 12:09:31,684 [shard 1:main] cluster - partition.cc:723 - local non-log disk size of {kafka/tp-workload-compaction/14}: 206
INFO  2024-07-09 12:09:31,693 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:97 - [pre-vote: true] vote reply from {id: 5, revision: 1073} - {term: 3, target_node: {id: 0, revision: 41}, vote_granted: false, log_ok: true}
INFO  2024-07-09 12:09:31,694 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:97 - [pre-vote: true] vote reply from {id: 6, revision: 1338} - {term: 3, target_node: {id: 0, revision: 41}, vote_granted: false, log_ok: false}
INFO  2024-07-09 12:09:31,696 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:255 - [pre-vote true] vote failed - node {id: 6, revision: 1338} has log which is more up to date than the current candidate
TRACE 2024-07-09 12:09:31,698 [shard 0:main] cluster - partition_leaders_table.cc:253 - updated partition: kafka/tp-workload-compaction/14 leader: {term: 2, current leader: {0}, previous leader: {1}, revision: 1338}
TRACE 2024-07-09 12:09:31,698 [shard 1:main] cluster - partition_leaders_table.cc:253 - updated partition: kafka/tp-workload-compaction/14 leader: {term: 2, current leader: {0}, previous leader: {1}, revision: 1338}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:3115 - Updating follower stats with config {current: {voters: {{id: 0, revision: 41}, {id: 5, revision: 1073}, {id: 6, revision: 1338}}, learners: {}}, old:{nullopt}, revision: 1338, update: {nullopt}, version: 6}}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] replicate_entries_stm.cc:168 - Leader append result: {time_since_append: 31, base_offset: 12806, last_offset: 12806, last_term: 2, byte_size: 154}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] replicate_entries_stm.cc:71 - Sending append entries request {group: 174, commit_index: 12804, term: 2, prev_log_index: 12805, prev_log_term: 1, last_visible_index: 12804, dirty_offset: 12805} to {id: 6, revision: 1338}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] replicate_entries_stm.cc:71 - Sending append entries request {group: 174, commit_index: 12804, term: 2, prev_log_index: 12805, prev_log_term: 1, last_visible_index: 12804, dirty_offset: 12805} to {id: 5, revision: 1073}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1712 - Received vote request: {node_id: {id: 5, revision: 1073}, target_node_id: {id: 0, revision: 41}, group: 174, term: 2, prev_log_index: 12805, prev_log_term: 1, leadership_xfer: false}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1763 - Already heard from the leader, not granting vote to node {id: 5, revision: 1073}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1660 - vote reply: {term: 2, target_node: {id: 5, revision: 1073}, vote_granted: false, log_ok: false}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1712 - Received vote request: {node_id: {id: 5, revision: 1073}, target_node_id: {id: 0, revision: 41}, group: 174, term: 3, prev_log_index: 12805, prev_log_term: 1, leadership_xfer: false}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1763 - Already heard from the leader, not granting vote to node {id: 5, revision: 1073}
TRACE 2024-07-09 12:09:31,706 [shard 1:raft] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1660 - vote reply: {term: 2, target_node: {id: 5, revision: 1073}, vote_granted: false, log_ok: false}
INFO  2024-07-09 12:09:31,706 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] vote_stm.cc:354 - [pre-vote true] vote failed - received larger term: 3
DEBUG 2024-07-09 12:09:31,706 [shard 1:main] raft - [group_id:174, {kafka/tp-workload-compaction/14}] consensus.cc:1010 - pre-vote phase success: false, current term: 3, leadership transfer: false
bashtanov commented 2 months ago

Fixed in https://github.com/redpanda-data/redpanda/pull/21361