redpanda-data / redpanda

Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
https://redpanda.com

Failed to allocate 6553592 bytes #10238

Closed. kargh closed this issue 1 year ago.

kargh commented 1 year ago

Version & Environment

Redpanda version: v23.1.4

What went wrong?

Node 0:

Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: ERROR 2023-04-20 14:33:13,958 [shard  0] seastar_memory - Dumping seastar memory diagnostics
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Used memory:  664M
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Free memory:  3074M
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Total memory: 4G
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Small pools:
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: objsz        spansz        usedobj        memory        unused        wst%
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 8        4K        148        40K        39K        97
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 10        4K        1        8K        8K        99
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 12        4K        9        20K        20K        99
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 14        4K        2        8K        8K        99
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 16        4K        8k        184K        66K        35
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 32        4K        6k        292K        92K        31
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 32        4K        107k        3M        47K        1
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 32        4K        2k        116K        51K        44
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 32        4K        3k        2M        2216K        96
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 48        4K        6k        308K        23K        7
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 48        4K        3k        2M        2063K        93
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 64        4K        5k        392K        50K        12
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 64        4K        55k        7M        3609K        51
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 80        4K        5k        508K        135K        26
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 96        4K        119k        11M        10K        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 112        4K        108k        31M        20M        63
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 128        4K        85k        22M        11M        51
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 160        4K        4k        700K        122K        17
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 192        4K        447        124K        40K        32
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 224        4K        766        184K        16K        8
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 256        4K        826        2M        2090K        91
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 320        8K        934        4M        3564K        92
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 384        8K        537        248K        47K        18
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 448        4K        669        356K        63K        17
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 512        4K        33        372K        356K        95
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 640        16K        63        144K        104K        72
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 768        16K        615        752K        290K        38
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 896        8K        129        272K        158K        58
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 1024        4K        15        312K        297K        95
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 1280        32K        824        5M        3738K        78
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 1536        32K        22        2M        2M        97
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 1792        16K        16        960K        931K        96
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 2048        8K        61        744K        622K        83
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 2560        64K        63        3M        2978K        94
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 3072        64K        73        2M        2M        89
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 3584        32K        11        896K        858K        95
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 4096        16K        200        2M        944K        54
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 5120        128K        3        640K        625K        97
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 6144        128K        13        2M        2M        96
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 7168        64K        4        2M        2M        98
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 8192        32K        5k        44M        7M        16
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 10240        64K        20        3M        3M        93
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 12288        64K        16        3M        3M        93
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 14336        128K        7        5M        5M        97
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 16384        64K        24k        377M        3M        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Page spans:
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: index        size        free        used        spans
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0        4K        38M        85M        31k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 1        8K        60M        5M        8k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 2        16K        65M        4M        4k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 3        32K        189M        54M        8k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 4        64K        323M        390M        11k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 5        128K        578M        9M        5k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 6        256K        832M        18M        3k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 7        512K        724M        2M        1k
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 8        1M        212M        1M        213
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 9        2M        46M        6M        26
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 10        4M        8M        12M        5
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 11        8M        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 12        16M        0B        48M        3
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 13        32M        0B        32M        1
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 14        64M        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 15        128M        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 16        256M        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 17        512M        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 18        1G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 19        2G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 20        4G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 21        8G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 22        16G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 23        32G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 24        64G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 25        128G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 26        256G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 27        512G        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 28        1T        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 29        2T        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 30        4T        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 31        8T        0B        0B        0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: ERROR 2023-04-20 14:33:13,972 [shard  0] seastar - Failed to allocate 6553592 bytes
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Aborting on shard 0.
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: Backtrace:
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5ccc766
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5d30162
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x42abf
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x92e3b
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x42a15
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x2c82e
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5c3f676
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5c4dd71
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4afe8a0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4afe7fa
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4a3c005
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4a9d6e4
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4a2f0a7
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4a2e18b
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x4923dbb
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x2f9d4b8
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x2f186cf
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x2f1e4a0
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5cea89f
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5cee577
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5ceb949
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5c0fbf1
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x5c0dd0f
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x1e0f86e
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x6000359
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x2d58f
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: /opt/redpanda/lib/libc.so.6+0x2d648
Apr 20 14:33:13 srv-01-03-404.iad1.trmr.io rpk[15248]: 0x1e09a24
Apr 20 14:33:14 srv-01-03-404.iad1.trmr.io systemd[1]: redpanda.service: main process exited, code=killed, status=6/ABRT

What should have happened instead?

Ideally the broker would allocate the memory (the dump reports roughly 3 GiB free) instead of aborting.
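A hedged reading of the dump, assuming the Seastar allocator serves large requests from contiguous page spans bucketed by power-of-two size (as the "Page spans" table suggests): the failed request of 6553592 bytes needs about 6.25 MiB of contiguous memory, which would come out of the 8M bucket or larger, and every bucket from 8M upward shows 0B free on all three nodes. Under that assumption the failure is fragmentation of free memory, not exhaustion. The arithmetic:

```python
# Sketch (assumption, not confirmed from Redpanda/Seastar source): a large
# allocation is satisfied from a contiguous span at least as big as the request,
# drawn from power-of-two-sized span buckets as shown in the "Page spans" table.
import math

request = 6_553_592   # bytes, from "Failed to allocate 6553592 bytes"
page = 4096           # 4 KiB pages, matching the dump's smallest span size

# Contiguous pages the request needs.
pages_needed = math.ceil(request / page)
print(pages_needed)   # 1600 pages, i.e. ~6.25 MiB contiguous

# Smallest power-of-two span bucket that can hold the request.
bucket = 1 << math.ceil(math.log2(request))
print(bucket // (1024 * 1024))  # 8 (MiB) -> the "index 11 / 8M" row, which shows 0B free
```

So ~3 GiB can be free in aggregate (mostly in 4K through 1M spans) while no single free region of 8 MiB exists, which matches the "11  8M  0B  0B  0" rows in each node's dump.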

kargh commented 1 year ago

4 out of 5 nodes crashed at the same time.

kargh commented 1 year ago

Node 1:

Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: ERROR 2023-04-20 14:33:17,176 [shard  0] seastar_memory - Dumping seastar memory diagnostics
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Used memory:  659M
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Free memory:  3079M
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Total memory: 4G
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Small pools:
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: objsz        spansz        usedobj        memory        unused        wst%
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 8        4K        144        36K        35K        96
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 10        4K        1        8K        8K        99
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 12        4K        9        16K        16K        99
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 14        4K        2        8K        8K        99
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 16        4K        8k        176K        56K        32
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 32        4K        6k        276K        73K        26
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 32        4K        107k        3M        70K        2
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 32        4K        2k        112K        47K        42
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 32        4K        3k        2M        1824K        95
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 48        4K        6k        312K        27K        8
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 48        4K        4k        3M        3092K        94
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 64        4K        6k        424K        79K        18
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 64        4K        55k        8M        5193K        60
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 80        4K        5k        544K        170K        31
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 96        4K        118k        11M        83K        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 112        4K        108k        32M        20M        63
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 128        4K        85k        21M        11M        51
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 160        4K        4k        744K        154K        20
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 192        4K        191        88K        52K        59
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 224        4K        767        184K        16K        8
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 256        4K        990        2M        2241K        90
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 320        8K        929        4M        3430K        92
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 384        8K        559        280K        70K        25
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 448        4K        677        328K        32K        9
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 512        4K        40        388K        368K        94
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 640        16K        62        144K        105K        72
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 768        16K        629        720K        248K        34
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 896        8K        301        424K        160K        37
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 1024        4K        59        336K        277K        82
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 1280        32K        842        5M        3619K        77
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 1536        32K        78        2M        2M        93
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 1792        16K        17        832K        802K        96
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 2048        8K        62        632K        508K        80
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 2560        64K        49        2M        1478K        92
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 3072        64K        79        2M        2M        89
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 3584        32K        18        2M        2M        96
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 4096        16K        199        2M        996K        55
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 5120        128K        11        2M        2M        97
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 6144        128K        13        2M        2M        95
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 7168        64K        7        3M        3M        98
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 8192        32K        4k        37M        5M        14
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 10240        64K        19        3M        3M        93
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 12288        64K        16        3M        3M        93
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 14336        128K        5        4M        4M        98
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 16384        64K        24k        377M        3M        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Page spans:
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: index        size        free        used        spans
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0        4K        39M        87M        32k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 1        8K        60M        5M        8k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 2        16K        63M        3M        4k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 3        32K        181M        47M        7k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 4        64K        326M        389M        11k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 5        128K        561M        8M        5k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 6        256K        794M        18M        3k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 7        512K        741M        2M        1k
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 8        1M        268M        1M        269
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 9        2M        30M        6M        18
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 10        4M        16M        12M        7
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 11        8M        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 12        16M        0B        48M        3
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 13        32M        0B        32M        1
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 14        64M        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 15        128M        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 16        256M        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 17        512M        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 18        1G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 19        2G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 20        4G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 21        8G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 22        16G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 23        32G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 24        64G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 25        128G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 26        256G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 27        512G        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 28        1T        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 29        2T        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 30        4T        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 31        8T        0B        0B        0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: ERROR 2023-04-20 14:33:17,188 [shard  0] seastar - Failed to allocate 6553592 bytes
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Aborting on shard 0.
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: Backtrace:
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5ccc766
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5d30162
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x42abf
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x92e3b
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x42a15
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x2c82e
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5c3f676
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5c4dd71
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4afe8a0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4afe7fa
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4a3c005
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4a9d6e4
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4a2f0a7
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4a2e18b
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x4923dbb
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x2f9d4b8
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x2f186cf
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x2f1e4a0
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5cea89f
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5cee577
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5ceb949
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5c0fbf1
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x5c0dd0f
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x1e0f86e
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x6000359
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x2d58f
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: /opt/redpanda/lib/libc.so.6+0x2d648
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io rpk[26513]: 0x1e09a24
Apr 20 14:33:17 srv-01-03-405.iad1.trmr.io systemd[1]: redpanda.service: main process exited, code=killed, status=6/ABRT
kargh commented 1 year ago

Node 4:

Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: ERROR 2023-04-20 14:33:15,934 [shard  8] seastar_memory - Dumping seastar memory diagnostics
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Used memory:  626M
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Free memory:  3112M
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Total memory: 4G
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Small pools:
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: objsz        spansz        usedobj        memory        unused        wst%
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 8        4K        231        36K        34K        94
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 10        4K        1        8K        8K        99
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 12        4K        2        12K        12K        99
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 14        4K        2        8K        8K        99
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 16        4K        7k        392K        287K        73
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 32        4K        5k        156K        10K        6
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 32        4K        5k        184K        24K        13
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 32        4K        2k        56K        6K        10
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 32        4K        3k        2M        2221K        95
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 48        4K        2k        104K        26K        24
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 48        4K        4k        3M        3034K        94
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 64        4K        5k        380K        66K        17
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 64        4K        46k        7M        4452K        60
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 80        4K        5k        476K        118K        24
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 96        4K        219k        20M        76K        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 112        4K        78k        30M        22M        72
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 128        4K        58k        11M        3791K        34
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 160        4K        3k        524K        61K        11
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 192        4K        51        48K        38K        80
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 224        4K        693        160K        8K        5
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 256        4K        857        2M        2326K        91
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 320        8K        849        4M        3431K        92
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 384        8K        532        240K        41K        16
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 448        4K        662        360K        70K        19
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 512        4K        93        436K        390K        89
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 640        16K        63        192K        153K        79
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 768        16K        619        704K        239K        33
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 896        8K        86        200K        124K        62
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 1024        4K        11        316K        305K        96
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 1280        32K        773        5M        4218K        81
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 1536        32K        19        1M        1187K        97
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 1792        16K        11        448K        429K        95
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 2048        8K        61        448K        326K        72
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 2560        64K        40        704K        603K        85
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 3072        64K        11        1M        1M        97
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 3584        32K        8        768K        739K        96
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 4096        16K        194        2M        840K        51
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 5120        128K        3        640K        625K        97
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 6144        128K        11        2M        2M        96
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 7168        64K        4        2M        2M        98
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 8192        32K        3k        31M        6M        19
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 10240        64K        19        3M        3M        93
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 12288        64K        17        3M        3M        93
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 14336        128K        6        4M        4M        97
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 16384        64K        24k        377M        4M        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Page spans:
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: index        size        free        used        spans
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0        4K        39M        80M        30k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 1        8K        59M        4M        8k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 2        16K        65M        3M        4k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 3        32K        197M        40M        8k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 4        64K        371M        387M        12k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 5        128K        654M        7M        5k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 6        256K        883M        2M        4k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 7        512K        633M        1M        1k
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 8        1M        180M        2M        182
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 9        2M        24M        4M        14
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 10        4M        8M        16M        6
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 11        8M        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 12        16M        0B        48M        3
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 13        32M        0B        32M        1
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 14        64M        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 15        128M        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 16        256M        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 17        512M        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 18        1G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 19        2G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 20        4G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 21        8G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 22        16G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 23        32G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 24        64G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 25        128G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 26        256G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 27        512G        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 28        1T        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 29        2T        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 30        4T        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 31        8T        0B        0B        0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: ERROR 2023-04-20 14:33:15,939 [shard  8] seastar - Failed to allocate 6553592 bytes
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Aborting on shard 8.
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: Backtrace:
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5ccc766
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5d30162
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x42abf
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x92e3b
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x42a15
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x2c82e
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5c3f676
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5c4dd71
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4afe8a0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4afe7fa
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4a3c005
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4a9d6e4
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4a2f0a7
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4a2e18b
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x4923dbb
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x2f9d4b8
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x2f186cf
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x2f1e4a0
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5cea89f
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5cee577
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5d318c5
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: 0x5c8c66f
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x91016
Apr 20 14:33:15 srv-01-04-405.iad1.trmr.io rpk[21800]: /opt/redpanda/lib/libc.so.6+0x1166cf
Apr 20 14:33:16 srv-01-04-405.iad1.trmr.io systemd[1]: redpanda.service: main process exited, code=killed, status=6/ABRT
kargh commented 1 year ago

Node 2 - not sure why this one crashed:

Apr 20 14:33:04 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:04,798 [shard 58] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rxdata-dspbidrequest/28_90/669721533-45-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:45, base_offset:669721533, committed_offset:669853885, dirty_offset:669853885}, compacted_segment=0, finished_self_compaction=0, generation={87677}, reader={/var/lib/redpanda/data/kafka/rxdata-dspbidrequest/28_90/669721533-45-v1.log, (138382057 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rxdata-dspbidrequest/28_90/669721533-45-v1.base_index, offsets:{669721533}, index:{header_bitflags:0, base_offset:{669721533}, max_offset:{669853885}, base_timestamp:{timestamp: 1681841669208}, max_timestamp:{timestamp: 1681842774227}, batch_timestamps_are_monotonic:0, index(3971,3971,3971)}, step:32768, needs_persistence:0}})
Apr 20 14:33:05 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:05,278 [shard 27] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rxdata-videoevent/15_130/984057511-57-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:57, base_offset:984057511, committed_offset:984319063, dirty_offset:984319063}, compacted_segment=0, finished_self_compaction=0, generation={308009}, reader={/var/lib/redpanda/data/kafka/rxdata-videoevent/15_130/984057511-57-v1.log, (128178323 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rxdata-videoevent/15_130/984057511-57-v1.base_index, offsets:{984057511}, index:{header_bitflags:0, base_offset:{984057511}, max_offset:{984319063}, base_timestamp:{timestamp: 1681840910893}, max_timestamp:{timestamp: 1681842785000}, batch_timestamps_are_monotonic:0, index(3860,3860,3860)}, step:32768, needs_persistence:0}})
Apr 20 14:33:06 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:06,173 [shard 60] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-pubopportunity/20_112/270981794-45-v1.log
Apr 20 14:33:08 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:08,328 [shard 17] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rx-usersync-us/4_78/6075080499-59-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:59, base_offset:6075080499, committed_offset:6075342108, dirty_offset:6075342108}, compacted_segment=0, finished_self_compaction=0, generation={7742}, reader={/var/lib/redpanda/data/kafka/rx-usersync-us/4_78/6075080499-59-v1.log, (132218631 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rx-usersync-us/4_78/6075080499-59-v1.base_index, offsets:{6075080499}, index:{header_bitflags:0, base_offset:{6075080499}, max_offset:{6075342108}, base_timestamp:{timestamp: 1681928795082}, max_timestamp:{timestamp: 1681929176763}, batch_timestamps_are_monotonic:1, index(2458,2458,2458)}, step:32768, needs_persistence:0}})
Apr 20 14:33:08 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:08,384 [shard 43] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-impconfirmation/18_96/19527098-62-v1.log
Apr 20 14:33:13 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:13,255 [shard 51] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rxdata-thirdpartybeacon/9_126/409720700-45-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:45, base_offset:409720700, committed_offset:410036141, dirty_offset:410036141}, compacted_segment=0, finished_self_compaction=0, generation={494933}, reader={/var/lib/redpanda/data/kafka/rxdata-thirdpartybeacon/9_126/409720700-45-v1.log, (139586511 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rxdata-thirdpartybeacon/9_126/409720700-45-v1.base_index, offsets:{409720700}, index:{header_bitflags:0, base_offset:{409720700}, max_offset:{410036141}, base_timestamp:{timestamp: 1681838834016}, max_timestamp:{timestamp: 1681842785646}, batch_timestamps_are_monotonic:0, index(4224,4224,4224)}, step:32768, needs_persistence:0}})
Apr 20 14:33:13 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:13,727 [shard  8] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.48, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407055, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:13 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:13,927 [shard  8] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.48, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407056, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,128 [shard  8] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.48, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407057, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,329 [shard  8] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.48, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407058, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,383 [shard  4] rpc - server.cc:159 - Disconnected 10.16.67.48:58500 (applying protocol, Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,437 [shard  8] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,443 [shard 53] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,443 [shard  4] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,445 [shard 56] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,445 [shard 58] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,445 [shard 30] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,446 [shard 21] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard 12] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard  8] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard  8] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard 53] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard  4] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,447 [shard 30] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,448 [shard 58] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,448 [shard 56] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,448 [shard 21] rpc - Disconnected from server {host: 10.16.67.48, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:14 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:14,707 [shard 59] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rx-bidrequest-stream/7_54/12888796706-61-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:61, base_offset:12888796706, committed_offset:12889757505, dirty_offset:12889757505}, compacted_segment=0, finished_self_compaction=0, generation={375528}, reader={/var/lib/redpanda/data/kafka/rx-bidrequest-stream/7_54/12888796706-61-v1.log, (140526349 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rx-bidrequest-stream/7_54/12888796706-61-v1.base_index, offsets:{12888796706}, index:{header_bitflags:0, base_offset:{12888796706}, max_offset:{12889757505}, base_timestamp:{timestamp: 1681842252000}, max_timestamp:{timestamp: 1681842784233}, batch_timestamps_are_monotonic:0, index(4234,4234,4234)}, step:32768, needs_persistence:0}})
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,376 [shard 12] raft - [group_id:157, {kafka/pixelverse_conv_diagnostic/2}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {42}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,377 [shard 12] raft - [group_id:157, {kafka/pixelverse_conv_diagnostic/2}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {42}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,387 [shard 12] raft - [group_id:157, {kafka/pixelverse_conv_diagnostic/2}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {42}} - {term:{63}, target_node_id{id: {2}, revision: {42}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,387 [shard 12] raft - [group_id:157, {kafka/pixelverse_conv_diagnostic/2}] vote_stm.cc:264 - becoming the leader term:63
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,388 [shard 12] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_diagnostic/2_42/61-63-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,411 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {74}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,411 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {74}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,421 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] vote_stm.cc:77 - vote reply from {id: {4}, revision: {74}} - {term:{52}, target_node_id{id: {2}, revision: {74}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,421 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] vote_stm.cc:264 - becoming the leader term:52
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,422 [shard 57] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-uid-map/23_74/10084069-52-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,435 [shard 12] raft - [group_id:157, {kafka/pixelverse_conv_diagnostic/2}] vote_stm.cc:279 - became the leader term: 63
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,445 [shard 30] raft - [group_id:400, {kafka/rx-cm-tremor/5}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {58}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,445 [shard 30] raft - [group_id:400, {kafka/rx-cm-tremor/5}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {58}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,456 [shard 30] raft - [group_id:400, {kafka/rx-cm-tremor/5}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {58}} - {term:{62}, target_node_id{id: {2}, revision: {58}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,456 [shard 30] raft - [group_id:400, {kafka/rx-cm-tremor/5}] vote_stm.cc:264 - becoming the leader term:62
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,456 [shard 30] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-tremor/5_58/61-62-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,493 [shard 19] raft - [group_id:595, {kafka/rx-fc-iad/20}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {70}}, received 54, current 53
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,500 [shard  6] raft - [group_id:254, {kafka/rx-aerokeys-devicedata/9}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {48}}, received 59, current 58
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,500 [shard 30] raft - [group_id:400, {kafka/rx-cm-tremor/5}] vote_stm.cc:279 - became the leader term: 62
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,513 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] vote_stm.cc:279 - became the leader term: 52
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,540 [shard 42] raft - [group_id:739, {kafka/rxdata-bidrequest/13}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {82}}, received 68, current 67
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,544 [shard  6] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-devicedata/9_48/56-59-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,548 [shard 19] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-fc-iad/20_70/9762-54-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,604 [shard 42] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-bidrequest/13_82/340314061-68-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,624 [shard 63] raft - [group_id:882, {kafka/rxdata-dvviewability/6}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {92}}, received 61, current 60
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,633 [shard 35] raft - [group_id:408, {kafka/rx-cm-tremor/13}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {58}}, received 62, current 61
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,641 [shard 14] raft - [group_id:800, {kafka/rxdata-bidresponse_extra/14}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {86}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,641 [shard 14] raft - [group_id:800, {kafka/rxdata-bidresponse_extra/14}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {86}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,652 [shard 14] raft - [group_id:800, {kafka/rxdata-bidresponse_extra/14}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {86}} - {term:{67}, target_node_id{id: {2}, revision: {86}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,652 [shard 14] raft - [group_id:800, {kafka/rxdata-bidresponse_extra/14}] vote_stm.cc:264 - becoming the leader term:67
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,652 [shard 14] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-bidresponse_extra/14_86/5493349-67-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,677 [shard 63] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-dvviewability/6_92/4667027-61-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,679 [shard 34] raft - [group_id:88, {kafka/dmp-cookie-sync/23}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {36}}, received 70, current 69
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,718 [shard 54] raft - [group_id:227, {kafka/pixelverse_conv_us/12}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {46}}, received 62, current 61
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,737 [shard 51] raft - [group_id:434, {kafka/rx-confiant-request-ams/9}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {60}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,737 [shard 51] raft - [group_id:434, {kafka/rx-confiant-request-ams/9}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {60}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,747 [shard 51] raft - [group_id:434, {kafka/rx-confiant-request-ams/9}] vote_stm.cc:77 - vote reply from {id: {4}, revision: {60}} - {term:{63}, target_node_id{id: {2}, revision: {60}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,747 [shard 51] raft - [group_id:434, {kafka/rx-confiant-request-ams/9}] vote_stm.cc:264 - becoming the leader term:63
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,748 [shard 51] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-ams/9_60/1871916-63-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,766 [shard 48] raft - [group_id:537, {kafka/rx-confiant-request-usw/22}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {66}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,766 [shard 48] raft - [group_id:537, {kafka/rx-confiant-request-usw/22}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {66}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,771 [shard 35] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-tremor/13_58/61-62-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,776 [shard 48] raft - [group_id:537, {kafka/rx-confiant-request-usw/22}] vote_stm.cc:77 - vote reply from {id: {4}, revision: {66}} - {term:{60}, target_node_id{id: {2}, revision: {66}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,776 [shard 48] raft - [group_id:537, {kafka/rx-confiant-request-usw/22}] vote_stm.cc:264 - becoming the leader term:60
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,777 [shard 48] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-usw/22_66/6037497-60-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,788 [shard 13] raft - [group_id:158, {kafka/pixelverse_conv_diagnostic/3}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {42}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,788 [shard 13] raft - [group_id:158, {kafka/pixelverse_conv_diagnostic/3}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {42}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,792 [shard 34] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/dmp-cookie-sync/23_36/5475726-70-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,799 [shard 13] raft - [group_id:158, {kafka/pixelverse_conv_diagnostic/3}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {42}} - {term:{70}, target_node_id{id: {2}, revision: {42}}, vote_granted: 1  , log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,799 [shard 13] raft - [group_id:158, {kafka/pixelverse_conv_diagnostic/3}] vote_stm.cc:264 - becoming the leader term:70
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,799 [shard 13] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_diagnostic/3_42/68-70-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,802 [shard 54] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_us/12_46/60-62-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,803 [shard 14] raft - [group_id:800, {kafka/rxdata-bidresponse_extra/14}] vote_stm.cc:279 - became the leader term: 67
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,817 [shard 46] raft - [group_id:319, {kafka/rx-aerokeys-userdata/14}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {52}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,817 [shard 46] raft - [group_id:319, {kafka/rx-aerokeys-userdata/14}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {52}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,828 [shard 46] raft - [group_id:319, {kafka/rx-aerokeys-userdata/14}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {52}} - {term:{60}, target_node_id{id: {2}, revision: {52}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,828 [shard 46] raft - [group_id:319, {kafka/rx-aerokeys-userdata/14}] vote_stm.cc:264 - becoming the leader term:60
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,828 [shard 46] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-userdata/14_52/58-60-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,844 [shard 48] raft - [group_id:111, {kafka/geouniques/16}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {38}}, received 72, current 71
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,849 [shard 59] raft - [group_id:661, {kafka/rx-uid-map/26}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {74}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,849 [shard 59] raft - [group_id:661, {kafka/rx-uid-map/26}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {74}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,859 [shard 59] raft - [group_id:661, {kafka/rx-uid-map/26}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {74}} - {term:{48}, target_node_id{id: {2}, revision: {74}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,859 [shard 59] raft - [group_id:661, {kafka/rx-uid-map/26}] vote_stm.cc:264 - becoming the leader term:48
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,860 [shard 59] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-uid-map/26_74/10069907-48-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:15,869 [shard  0] cluster - metadata_dissemination_service.cc:486 - Error sending metadata update rpc::errc::exponential_backoff to 0
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,869 [shard 13] raft - [group_id:158, {kafka/pixelverse_conv_diagnostic/3}] vote_stm.cc:279 - became the leader term: 70
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,873 [shard 23] raft - [group_id:70, {kafka/dmp-cookie-sync/5}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {36}}, received 70, current 69
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,878 [shard 49] raft - [group_id:219, {kafka/pixelverse_conv_us/4}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {46}}, received 57, current 56
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,906 [shard 48] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/geouniques/16_38/138-72-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,911 [shard 63] raft - [group_id:241, {kafka/pixelverse_conv_us/26}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {46}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,911 [shard 63] raft - [group_id:241, {kafka/pixelverse_conv_us/26}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {46}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,921 [shard 63] raft - [group_id:241, {kafka/pixelverse_conv_us/26}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {46}} - {term:{67}, target_node_id{id: {2}, revision: {46}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,921 [shard 63] raft - [group_id:241, {kafka/pixelverse_conv_us/26}] vote_stm.cc:264 - becoming the leader term:67
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,922 [shard 63] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_us/26_46/66-67-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,930 [shard 35] raft - [group_id:196, {kafka/pixelverse_conv_eu/11}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {44}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,930 [shard 35] raft - [group_id:196, {kafka/pixelverse_conv_eu/11}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {44}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,931 [shard  7] raft - [group_id:11, {kafka/__consumer_offsets/10}] consensus.cc:1625 - Received vote request with larger term from node {id: {4}, revision: {11}}, received 72, current 71
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,931 [shard 48] raft - [group_id:537, {kafka/rx-confiant-request-usw/22}] vote_stm.cc:279 - became the leader term: 60
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,941 [shard 35] raft - [group_id:196, {kafka/pixelverse_conv_eu/11}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {44}} - {term:{64}, target_node_id{id: {2}, revision: {44}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,941 [shard 35] raft - [group_id:196, {kafka/pixelverse_conv_eu/11}] vote_stm.cc:264 - becoming the leader term:64
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,941 [shard 35] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/11_44/63-64-v1.log
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,955 [shard 46] raft - [group_id:319, {kafka/rx-aerokeys-userdata/14}] vote_stm.cc:279 - became the leader term: 60
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,970 [shard 16] raft - [group_id:483, {kafka/rx-confiant-request-ap/28}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {62}}, received 54, current 53
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,970 [shard 59] raft - [group_id:661, {kafka/rx-uid-map/26}] vote_stm.cc:279 - became the leader term: 48
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,995 [shard 11] raft - [group_id:1329, {kafka/rxdata-sspbidresponse/3}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {122}} with timeout 1500
Apr 20 14:33:15 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:15,995 [shard 11] raft - [group_id:1329, {kafka/rxdata-sspbidresponse/3}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {122}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,005 [shard 11] raft - [group_id:1329, {kafka/rxdata-sspbidresponse/3}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {122}} - {term:{48}, target_node_id{id: {2}, revision: {122}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,006 [shard 11] raft - [group_id:1329, {kafka/rxdata-sspbidresponse/3}] vote_stm.cc:264 - becoming the leader term:48
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,006 [shard 11] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-sspbidresponse/3_122/55103227-48-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,024 [shard 49] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_us/4_46/55-57-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,051 [shard 14] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.51, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407143, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,072 [shard 11] raft - [group_id:1329, {kafka/rxdata-sspbidresponse/3}] vote_stm.cc:279 - became the leader term: 48
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,087 [shard 45] raft - [group_id:532, {kafka/rx-confiant-request-usw/17}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {66}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,087 [shard 45] raft - [group_id:532, {kafka/rx-confiant-request-usw/17}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {66}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,097 [shard 45] raft - [group_id:532, {kafka/rx-confiant-request-usw/17}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {66}} - {term:{58}, target_node_id{id: {2}, revision: {66}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,097 [shard 45] raft - [group_id:532, {kafka/rx-confiant-request-usw/17}] vote_stm.cc:264 - becoming the leader term:58
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,097 [shard 27] raft - [group_id:716, {kafka/rx-usersync-us/21}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {78}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,097 [shard 27] raft - [group_id:716, {kafka/rx-usersync-us/21}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {78}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,098 [shard 45] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-usw/17_66/6037551-58-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,101 [shard 35] raft - [group_id:196, {kafka/pixelverse_conv_eu/11}] vote_stm.cc:279 - became the leader term: 64
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,108 [shard 27] raft - [group_id:716, {kafka/rx-usersync-us/21}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {78}} - {term:{51}, target_node_id{id: {2}, revision: {78}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,108 [shard 27] raft - [group_id:716, {kafka/rx-usersync-us/21}] vote_stm.cc:264 - becoming the leader term:51
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,108 [shard 27] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-usersync-us/21_78/6116763270-51-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,139 [shard 54] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/pixelverse_conv_us/12_46/59-61-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:61, base_offset:59, committed_offset:59, dirty_offset:59}, compacted_segment=0, finished_self_compaction=0, generation={4}, reader={/var/lib/redpanda/data/kafka/pixelverse_conv_us/12_46/59-61-v1.log, (330 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/pixelverse_conv_us/12_46/59-61-v1.base_index, offsets:{59}, index:{header_bitflags:0, base_offset:{59}, max_offset:{59}, base_timestamp:{timestamp: 1680360140418}, max_timestamp:{timestamp: 1680360140418}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,152 [shard 59] raft - [group_id:234, {kafka/pixelverse_conv_us/19}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {46}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,152 [shard 59] raft - [group_id:234, {kafka/pixelverse_conv_us/19}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {46}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,162 [shard 59] raft - [group_id:234, {kafka/pixelverse_conv_us/19}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {46}} - {term:{62}, target_node_id{id: {2}, revision: {46}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,162 [shard 59] raft - [group_id:234, {kafka/pixelverse_conv_us/19}] vote_stm.cc:264 - becoming the leader term:62
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,163 [shard 59] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_us/19_46/61-62-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,168 [shard 32] raft - [group_id:190, {kafka/pixelverse_conv_eu/5}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {44}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,168 [shard 32] raft - [group_id:190, {kafka/pixelverse_conv_eu/5}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {44}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,172 [shard 32] raft - [group_id:190, {kafka/pixelverse_conv_eu/5}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {44}} - {term:{69}, target_node_id{id: {2}, revision: {44}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,172 [shard 32] raft - [group_id:190, {kafka/pixelverse_conv_eu/5}] vote_stm.cc:264 - becoming the leader term:69
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,173 [shard 32] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/5_44/68-69-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,212 [shard 53] raft - [group_id:439, {kafka/rx-confiant-request-ams/14}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {60}}, received 61, current 60
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,225 [shard 11] raft - [group_id:19, {kafka_internal/group/2}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {12}}, received 63, current 62
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,238 [shard 11] raft - [group_id:581, {kafka/rx-fc-iad/6}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {70}}, received 55, current 54
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,248 [shard 15] raft - [group_id:909, {kafka/rxdata-impbeacon/3}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {94}}, received 54, current 53
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,251 [shard 14] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.51, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407144, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,269 [shard 59] raft - [group_id:234, {kafka/pixelverse_conv_us/19}] vote_stm.cc:279 - became the leader term: 62
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,272 [shard 16] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-ap/28_62/1958610-54-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,276 [shard 11] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-fc-iad/6_70/10230-55-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,288 [shard 43] raft - [group_id:422, {kafka/rx-cm-tremor/27}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {58}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,288 [shard 43] raft - [group_id:422, {kafka/rx-cm-tremor/27}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {58}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,299 [shard 43] raft - [group_id:422, {kafka/rx-cm-tremor/27}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {58}} - {term:{50}, target_node_id{id: {2}, revision: {58}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,299 [shard 43] raft - [group_id:422, {kafka/rx-cm-tremor/27}] vote_stm.cc:264 - becoming the leader term:50
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,300 [shard 43] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-tremor/27_58/49-50-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,316 [shard 11] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka_internal/group/2_12/61-63-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,320 [shard 43] raft - [group_id:741, {kafka/rxdata-bidrequest/15}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {82}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,320 [shard 43] raft - [group_id:741, {kafka/rxdata-bidrequest/15}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {82}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,327 [shard 22] raft - [group_id:387, {kafka/rx-cm-post/22}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {56}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,328 [shard 22] raft - [group_id:387, {kafka/rx-cm-post/22}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {56}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,330 [shard 43] raft - [group_id:741, {kafka/rxdata-bidrequest/15}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {82}} - {term:{47}, target_node_id{id: {2}, revision: {82}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,330 [shard 43] raft - [group_id:741, {kafka/rxdata-bidrequest/15}] vote_stm.cc:264 - becoming the leader term:47
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,330 [shard 43] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-bidrequest/15_82/340345237-47-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,338 [shard 22] raft - [group_id:387, {kafka/rx-cm-post/22}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {56}} - {term:{66}, target_node_id{id: {2}, revision: {56}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,338 [shard 22] raft - [group_id:387, {kafka/rx-cm-post/22}] vote_stm.cc:264 - becoming the leader term:66
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,338 [shard 22] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-post/22_56/61-66-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,344 [shard 34] raft - [group_id:300, {kafka/rx-aerokeys-ipdata/25}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {50}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,344 [shard 34] raft - [group_id:300, {kafka/rx-aerokeys-ipdata/25}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {50}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,352 [shard 32] raft - [group_id:190, {kafka/pixelverse_conv_eu/5}] vote_stm.cc:279 - became the leader term: 69
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,354 [shard 34] raft - [group_id:300, {kafka/rx-aerokeys-ipdata/25}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {50}} - {term:{60}, target_node_id{id: {2}, revision: {50}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,354 [shard 34] raft - [group_id:300, {kafka/rx-aerokeys-ipdata/25}] vote_stm.cc:264 - becoming the leader term:60
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,355 [shard 34] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-ipdata/25_50/59-60-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 29] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 61] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 16] rpc - server.cc:159 - Disconnected 10.16.67.51:50640 (applying protocol, Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 28] rpc - server.cc:159 - Disconnected 10.16.67.51:50524 (applying protocol, Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 59] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 50] rpc - server.cc:159 - Disconnected 10.16.67.51:50610 (applying protocol, Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 26] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,372 [shard 14] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,373 [shard 40] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:16,373 [shard 51] raft - [follower: {id: {4}, revision: {60}}] [group_id:434, {kafka/rx-confiant-request-ams/9}] - recovery_stm.cc:449 - recovery append entries error: rpc::errc::disconnected_endpoint(node down)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,373 [shard 51] raft - [group_id:434, {kafka/rx-confiant-request-ams/9}] consensus.cc:572 - Node {id: {4}, revision: {60}} recovery cancelled (rpc::errc::disconnected_endpoint(node down))
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,376 [shard 53] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-ams/14_60/1871909-61-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,380 [shard 42] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:16,381 [shard  0] cluster - metadata_dissemination_service.cc:486 - Error sending metadata update rpc::errc::disconnected_endpoint(node down) to 4
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,382 [shard 45] raft - [group_id:532, {kafka/rx-confiant-request-usw/17}] vote_stm.cc:279 - became the leader term: 58
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,402 [shard 27] raft - [group_id:716, {kafka/rx-usersync-us/21}] vote_stm.cc:279 - became the leader term: 51
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,415 [shard  4] raft - [group_id:144, {kafka/pixelverse_conv_ap/19}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {40}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,415 [shard  4] raft - [group_id:144, {kafka/pixelverse_conv_ap/19}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {40}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,419 [shard 43] raft - [group_id:741, {kafka/rxdata-bidrequest/15}] vote_stm.cc:279 - became the leader term: 47
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,419 [shard 43] raft - [group_id:422, {kafka/rx-cm-tremor/27}] vote_stm.cc:279 - became the leader term: 50
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,422 [shard 22] raft - [group_id:387, {kafka/rx-cm-post/22}] vote_stm.cc:279 - became the leader term: 66
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,423 [shard 15] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-impbeacon/3_94/361044250-54-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,450 [shard  4] raft - [group_id:144, {kafka/pixelverse_conv_ap/19}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {40}} - {term:{70}, target_node_id{id: {2}, revision: {40}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,450 [shard  4] raft - [group_id:144, {kafka/pixelverse_conv_ap/19}] vote_stm.cc:264 - becoming the leader term:70
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,452 [shard  4] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_ap/19_40/68-70-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,452 [shard 14] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.51, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407145, 1 in flight, time since: {init: 100 ms, enqueue: 100 ms, dispatch: 100 ms, written: 100 ms}, flushed: true
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,478 [shard 53] kafka - fetch_session_cache.cc:115 - no session with id 704643059 found
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,478 [shard 14] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 61] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 59] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 25] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 29] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 14] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 40] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 42] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,481 [shard 26] rpc - Disconnected from server {host: 10.16.67.51, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,497 [shard 32] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/pixelverse_conv_eu/5_44/67-68-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:68, base_offset:67, committed_offset:67, dirty_offset:67}, compacted_segment=0, finished_self_compaction=0, generation={4}, reader={/var/lib/redpanda/data/kafka/pixelverse_conv_eu/5_44/67-68-v1.log, (330 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/pixelverse_conv_eu/5_44/67-68-v1.base_index, offsets:{67}, index:{header_bitflags:0, base_offset:{67}, max_offset:{67}, base_timestamp:{timestamp: 1680360141356}, max_timestamp:{timestamp: 1680360141356}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,498 [shard 22] raft - [group_id:279, {kafka/rx-aerokeys-ipdata/4}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {50}}, received 60, current 59
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,509 [shard 34] raft - [group_id:300, {kafka/rx-aerokeys-ipdata/25}] vote_stm.cc:279 - became the leader term: 60
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,554 [shard 63] raft - [group_id:241, {kafka/pixelverse_conv_us/26}] vote_stm.cc:279 - became the leader term: 67
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,562 [shard 10] raft - [group_id:473, {kafka/rx-confiant-request-ap/18}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {62}}, received 59, current 58
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,587 [shard  2] raft - [group_id:3, {kafka/__consumer_offsets/2}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {11}}, received 77, current 76
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,588 [shard  0] raft - [group_id:350, {kafka/rx-bidrequest-stream/15}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {54}}, received 56, current 55
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,608 [shard  1] raft - [group_id:1524, {kafka/summarystats-pubdealstat/18}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {134}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,608 [shard  1] raft - [group_id:1524, {kafka/summarystats-pubdealstat/18}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {134}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,618 [shard  1] raft - [group_id:1524, {kafka/summarystats-pubdealstat/18}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {134}} - {term:{53}, target_node_id{id: {2}, revision: {134}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,618 [shard  1] raft - [group_id:1524, {kafka/summarystats-pubdealstat/18}] vote_stm.cc:264 - becoming the leader term:53
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,619 [shard  1] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/summarystats-pubdealstat/18_134/4106-53-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,633 [shard 10] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-ap/18_62/1958601-59-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,648 [shard 22] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-ipdata/4_50/59-60-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,653 [shard  4] raft - [group_id:144, {kafka/pixelverse_conv_ap/19}] vote_stm.cc:279 - became the leader term: 70
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,653 [shard 18] raft - [group_id:379, {kafka/rx-cm-post/14}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {56}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,653 [shard 18] raft - [group_id:379, {kafka/rx-cm-post/14}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {56}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,655 [shard  0] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-bidrequest-stream/15_54/13216325062-56-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,664 [shard 18] raft - [group_id:379, {kafka/rx-cm-post/14}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {56}} - {term:{60}, target_node_id{id: {2}, revision: {56}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,664 [shard 18] raft - [group_id:379, {kafka/rx-cm-post/14}] vote_stm.cc:264 - becoming the leader term:60
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,665 [shard 18] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-post/14_56/59-60-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,671 [shard  1] raft - [group_id:1524, {kafka/summarystats-pubdealstat/18}] vote_stm.cc:279 - became the leader term: 53
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,705 [shard 36] raft - [group_id:729, {kafka/rxdata-bidrequest/3}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {82}}, received 64, current 63
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,705 [shard 53] raft - [group_id:652, {kafka/rx-uid-map/17}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {74}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,705 [shard 53] raft - [group_id:652, {kafka/rx-uid-map/17}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {74}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,715 [shard 53] raft - [group_id:652, {kafka/rx-uid-map/17}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {74}} - {term:{53}, target_node_id{id: {2}, revision: {74}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,715 [shard 53] raft - [group_id:652, {kafka/rx-uid-map/17}] vote_stm.cc:264 - becoming the leader term:53
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,716 [shard 53] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-uid-map/17_74/10082779-53-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,722 [shard  2] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/__consumer_offsets/2_11/521362386-77-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,758 [shard 40] raft - [group_id:1056, {kafka/rxdata-lurl/0}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {104}}, received 57, current 56
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,793 [shard 40] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-lurl/0_104/216946816-57-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,798 [shard 36] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-bidrequest/3_82/340328878-64-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,798 [shard 53] raft - [group_id:652, {kafka/rx-uid-map/17}] vote_stm.cc:279 - became the leader term: 53
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,820 [shard 12] raft - [group_id:689, {kafka/rx-uid-map-batch/24}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {76}}, received 63, current 62
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,822 [shard 18] raft - [group_id:379, {kafka/rx-cm-post/14}] vote_stm.cc:279 - became the leader term: 60
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,836 [shard 56] raft - [group_id:1296, {kafka/rxdata-segmentusage/0}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {120}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,836 [shard 56] raft - [group_id:1296, {kafka/rxdata-segmentusage/0}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {120}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,838 [shard 51] raft - [group_id:117, {kafka/geouniques/22}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {38}}, received 68, current 67
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,847 [shard 56] raft - [group_id:1296, {kafka/rxdata-segmentusage/0}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {120}} - {term:{51}, target_node_id{id: {2}, revision: {120}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,847 [shard 56] raft - [group_id:1296, {kafka/rxdata-segmentusage/0}] vote_stm.cc:264 - becoming the leader term:51
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,848 [shard 56] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-segmentusage/0_120/32185778-51-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,865 [shard 37] raft - [group_id:198, {kafka/pixelverse_conv_eu/13}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {44}}, received 66, current 65
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,871 [shard 55] raft - [group_id:761, {kafka/rxdata-bidresponse/5}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {84}}, received 56, current 55
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,902 [shard 33] raft - [group_id:192, {kafka/pixelverse_conv_eu/7}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {44}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,902 [shard 33] raft - [group_id:192, {kafka/pixelverse_conv_eu/7}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {44}} with timeout 1500
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,912 [shard 33] raft - [group_id:192, {kafka/pixelverse_conv_eu/7}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {44}} - {term:{72}, target_node_id{id: {2}, revision: {44}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,912 [shard 33] raft - [group_id:192, {kafka/pixelverse_conv_eu/7}] vote_stm.cc:264 - becoming the leader term:72
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,913 [shard 33] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/7_44/71-72-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,925 [shard 30] raft - [group_id:187, {kafka/pixelverse_conv_eu/2}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {44}}, received 57, current 56
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,948 [shard 37] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/13_44/65-66-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,948 [shard 55] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-bidresponse/5_84/64134653-56-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,951 [shard 51] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/geouniques/22_38/127-68-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,951 [shard 12] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-uid-map-batch/24_76/23035971-63-v1.log
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,953 [shard 19] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.50, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407080, 1 in flight, time since: {init: 100 ms, enqueue: 100 ms, dispatch: 100 ms, written: 100 ms}, flushed: true
Apr 20 14:33:16 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:16,988 [shard 56] raft - [group_id:1296, {kafka/rxdata-segmentusage/0}] vote_stm.cc:279 - became the leader term: 51
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,017 [shard 18] raft - [group_id:487, {kafka/rx-confiant-request-iad/2}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {64}}, received 66, current 65
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,022 [shard 17] raft - [group_id:1019, {kafka/rxdata-impnurl/23}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {100}}, received 59, current 58
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,048 [shard 23] raft - [group_id:708, {kafka/rx-usersync-us/13}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {78}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,048 [shard 23] raft - [group_id:708, {kafka/rx-usersync-us/13}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {78}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,059 [shard 23] raft - [group_id:708, {kafka/rx-usersync-us/13}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {78}} - {term:{59}, target_node_id{id: {2}, revision: {78}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,059 [shard 23] raft - [group_id:708, {kafka/rx-usersync-us/13}] vote_stm.cc:264 - becoming the leader term:59
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,060 [shard 23] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-usersync-us/13_78/6118298719-59-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,060 [shard  5] raft - [group_id:358, {kafka/rx-bidrequest-stream/23}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {54}}, received 72, current 71
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,074 [shard 20] raft - [group_id:704, {kafka/rx-usersync-us/9}] consensus.cc:1625 - Received vote request with larger term from node {id: {1}, revision: {78}}, received 63, current 62
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,083 [shard 17] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-impnurl/23_100/41400906-59-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,094 [shard 30] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/2_44/56-57-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,104 [shard 33] raft - [group_id:192, {kafka/pixelverse_conv_eu/7}] vote_stm.cc:279 - became the leader term: 72
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,105 [shard 32] raft - [group_id:403, {kafka/rx-cm-tremor/8}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {58}}, received 67, current 66
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,119 [shard 18] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-iad/2_64/14448344-66-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,121 [shard 23] raft - [group_id:708, {kafka/rx-usersync-us/13}] vote_stm.cc:279 - became the leader term: 59
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,142 [shard 29] raft - [group_id:826, {kafka/rxdata-click/10}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {88}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,142 [shard 29] raft - [group_id:826, {kafka/rxdata-click/10}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {88}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,153 [shard 29] raft - [group_id:826, {kafka/rxdata-click/10}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {88}} - {term:{57}, target_node_id{id: {2}, revision: {88}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,153 [shard 29] raft - [group_id:826, {kafka/rxdata-click/10}] vote_stm.cc:264 - becoming the leader term:57
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,154 [shard 29] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-click/10_88/740765-57-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,158 [shard 19] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.50, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407081, 1 in flight, time since: {init: 100 ms, enqueue: 100 ms, dispatch: 100 ms, written: 100 ms}, flushed: true
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,176 [shard 33] raft - [group_id:299, {kafka/rx-aerokeys-ipdata/24}] vote_stm.cc:52 - Sending vote request to {id: {1}, revision: {50}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,176 [shard 33] raft - [group_id:299, {kafka/rx-aerokeys-ipdata/24}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {50}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,186 [shard 33] raft - [group_id:299, {kafka/rx-aerokeys-ipdata/24}] vote_stm.cc:77 - vote reply from {id: {1}, revision: {50}} - {term:{71}, target_node_id{id: {2}, revision: {50}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,186 [shard 33] raft - [group_id:299, {kafka/rx-aerokeys-ipdata/24}] vote_stm.cc:264 - becoming the leader term:71
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,187 [shard 33] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-ipdata/24_50/69-71-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,195 [shard 63] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/pixelverse_conv_us/26_46/65-66-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:66, base_offset:65, committed_offset:65, dirty_offset:65}, compacted_segment=0, finished_self_compaction=0, generation={4}, reader={/var/lib/redpanda/data/kafka/pixelverse_conv_us/26_46/65-66-v1.log, (330 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/pixelverse_conv_us/26_46/65-66-v1.base_index, offsets:{65}, index:{header_bitflags:0, base_offset:{65}, max_offset:{65}, base_timestamp:{timestamp: 1680360140510}, max_timestamp:{timestamp: 1680360140510}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,204 [shard  5] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-bidrequest-stream/23_54/13216409466-72-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,233 [shard 32] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-tremor/8_58/65-67-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,263 [shard  6] raft - [group_id:10, {kafka/__consumer_offsets/9}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {11}}, received 75, current 74
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,267 [shard 25] raft - [group_id:1139, {kafka/rxdata-outstream_pread_event/23}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {108}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,268 [shard 25] raft - [group_id:1139, {kafka/rxdata-outstream_pread_event/23}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {108}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,278 [shard 25] raft - [group_id:1139, {kafka/rxdata-outstream_pread_event/23}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {108}} - {term:{49}, target_node_id{id: {2}, revision: {108}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,278 [shard 25] raft - [group_id:1139, {kafka/rxdata-outstream_pread_event/23}] vote_stm.cc:264 - becoming the leader term:49
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,279 [shard 25] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-outstream_pread_event/23_108/344-49-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,346 [shard 17] raft - [group_id:29, {kafka_internal/group/12}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {12}}, received 55, current 54
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,347 [shard  6] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/__consumer_offsets/9_11/66539563-75-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,359 [shard 19] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.50, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407082, 1 in flight, time since: {init: 100 ms, enqueue: 100 ms, dispatch: 100 ms, written: 100 ms}, flushed: true
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,388 [shard 31] raft - [group_id:83, {kafka/dmp-cookie-sync/18}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {36}}, received 67, current 66
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,431 [shard 32] raft - [group_id:86, {kafka/dmp-cookie-sync/21}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {36}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,431 [shard 32] raft - [group_id:86, {kafka/dmp-cookie-sync/21}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {36}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,433 [shard 17] raft - [group_id:378, {kafka/rx-cm-post/13}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {56}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,433 [shard 17] raft - [group_id:378, {kafka/rx-cm-post/13}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {56}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,433 [shard 33] raft - [group_id:619, {kafka/rx-fc-usw/14}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {72}}, received 63, current 62
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,441 [shard 32] raft - [group_id:86, {kafka/dmp-cookie-sync/21}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {36}} - {term:{65}, target_node_id{id: {2}, revision: {36}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,441 [shard 32] raft - [group_id:86, {kafka/dmp-cookie-sync/21}] vote_stm.cc:264 - becoming the leader term:65
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,442 [shard 32] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/dmp-cookie-sync/21_36/5470737-65-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,443 [shard 17] raft - [group_id:378, {kafka/rx-cm-post/13}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {56}} - {term:{58}, target_node_id{id: {2}, revision: {56}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,443 [shard 17] raft - [group_id:378, {kafka/rx-cm-post/13}] vote_stm.cc:264 - becoming the leader term:58
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,443 [shard 17] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-cm-post/13_56/57-58-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,480 [shard 13] raft - [group_id:1224, {kafka/rxdata-rg_application/18}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {114}}, received 56, current 55
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,495 [shard 36] raft - [group_id:197, {kafka/pixelverse_conv_eu/12}] vote_stm.cc:52 - Sending vote request to {id: {4}, revision: {44}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,495 [shard 36] raft - [group_id:197, {kafka/pixelverse_conv_eu/12}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {44}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,496 [shard 21] raft - [group_id:1131, {kafka/rxdata-outstream_pread_event/15}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 45, dirty offset 346
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,500 [shard 54] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,503 [shard 60] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,505 [shard 36] raft - [group_id:197, {kafka/pixelverse_conv_eu/12}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {44}} - {term:{77}, target_node_id{id: {2}, revision: {44}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,505 [shard 36] raft - [group_id:197, {kafka/pixelverse_conv_eu/12}] vote_stm.cc:264 - becoming the leader term:77
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,505 [shard 36] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_eu/12_44/75-77-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,506 [shard 62] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,516 [shard 34] raft - [group_id:1580, {kafka/summarystats-udbustat/14}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 46, dirty offset 3148
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,522 [shard 17] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka_internal/group/12_12/54-55-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,527 [shard 53] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,532 [shard  0] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,532 [shard 15] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,532 [shard 52] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,532 [shard 10] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 16] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 33] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 44] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 59] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 42] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 46] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 20] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 22] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard  4] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,533 [shard 27] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,534 [shard 24] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,535 [shard 33] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-fc-usw/14_72/1992-63-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,536 [shard  2] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,537 [shard 63] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,537 [shard 43] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,537 [shard 36] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,537 [shard 51] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,538 [shard 31] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/dmp-cookie-sync/18_36/5473268-67-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,538 [shard 13] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rxdata-rg_application/18_114/54-56-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,539 [shard 49] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,539 [shard 40] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,540 [shard 12] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,543 [shard 47] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,544 [shard  3] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,545 [shard 61] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,546 [shard 32] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,546 [shard 29] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,546 [shard 41] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,546 [shard 11] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,547 [shard 43] raft - [group_id:529, {kafka/rx-confiant-request-usw/14}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {66}}, received 55, current 54
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,548 [shard  7] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,548 [shard 39] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,548 [shard 17] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,549 [shard  6] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,550 [shard 52] raft - [group_id:970, {kafka/rxdata-impexpired/4}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 48, dirty offset 3694733
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,550 [shard 21] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,550 [shard 23] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,550 [shard 34] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,551 [shard 19] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,551 [shard  1] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,551 [shard 58] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,551 [shard 18] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,555 [shard 31] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,556 [shard 25] raft - [group_id:1139, {kafka/rxdata-outstream_pread_event/23}] vote_stm.cc:279 - became the leader term: 49
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,558 [shard 36] raft - [group_id:197, {kafka/pixelverse_conv_eu/12}] vote_stm.cc:279 - became the leader term: 77
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,558 [shard 55] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,559 [shard 19] rpc - transport.cc:210 - RPC timeout (100 ms) to {host: 10.16.67.50, port: 33145}, method: node_status_rpc::node_status, correlation id: 16407083, 1 in flight, time since: {init: 100 ms, enqueue: 99 ms, dispatch: 99 ms, written: 99 ms}, flushed: true
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,559 [shard 28] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,561 [shard 48] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,561 [shard 38] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,562 [shard 45] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,567 [shard 56] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,568 [shard 32] raft - [group_id:86, {kafka/dmp-cookie-sync/21}] vote_stm.cc:279 - became the leader term: 65
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,569 [shard 17] raft - [group_id:378, {kafka/rx-cm-post/13}] vote_stm.cc:279 - became the leader term: 58
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,570 [shard 19] raft - [group_id:1236, {kafka/rxdata-rg_error/0}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 54, dirty offset 52
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,574 [shard 37] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,574 [shard  5] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,576 [shard 14] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,579 [shard  8] raft - [group_id:151, {kafka/pixelverse_conv_ap/26}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {40}}, received 67, current 66
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,584 [shard 35] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,584 [shard 30] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,590 [shard 26] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,605 [shard 57] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,618 [shard 43] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-confiant-request-usw/14_66/6037714-55-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,620 [shard 28] raft - [group_id:290, {kafka/rx-aerokeys-ipdata/15}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {50}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,620 [shard 28] raft - [group_id:290, {kafka/rx-aerokeys-ipdata/15}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {50}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,627 [shard 14] raft - [group_id:694, {kafka/rx-uid-map-batch/29}] consensus.cc:1625 - Received vote request with larger term from node {id: {3}, revision: {76}}, received 53, current 52
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,631 [shard 28] raft - [group_id:290, {kafka/rx-aerokeys-ipdata/15}] vote_stm.cc:77 - vote reply from {id: {3}, revision: {50}} - {term:{53}, target_node_id{id: {2}, revision: {50}}, vote_granted: 1, log_ok:1}
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,631 [shard 28] raft - [group_id:290, {kafka/rx-aerokeys-ipdata/15}] vote_stm.cc:264 - becoming the leader term:53
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,632 [shard 28] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/rx-aerokeys-ipdata/15_50/48-53-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,650 [shard 11] raft - [group_id:1222, {kafka/rxdata-rg_application/16}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 48, dirty offset 47
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,651 [shard 54] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,654 [shard 60] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,656 [shard 33] raft - [group_id:1152, {kafka/rxdata-providerdmpevent/6}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 51, dirty offset 50
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,657 [shard 62] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,666 [shard 30] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/rx-cm-tremor/5_58/60-61-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:61, base_offset:60, committed_offset:60, dirty_offset:60}, compacted_segment=0, finished_self_compaction=0, generation={4}, reader={/var/lib/redpanda/data/kafka/rx-cm-tremor/5_58/60-61-v1.log, (330 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/rx-cm-tremor/5_58/60-61-v1.base_index, offsets:{60}, index:{header_bitflags:0, base_offset:{60}, max_offset:{60}, base_timestamp:{timestamp: 1680360141041}, max_timestamp:{timestamp: 1680360141041}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,666 [shard 30] storage - disk_log_impl.cc:1232 - Removing "/var/lib/redpanda/data/kafka/pixelverse_conv_eu/2_44/55-56-v1.log" (remove_prefix_full_segments, {offset_tracker:{term:56, base_offset:55, committed_offset:55, dirty_offset:55}, compacted_segment=0, finished_self_compaction=0, generation={4}, reader={/var/lib/redpanda/data/kafka/pixelverse_conv_eu/2_44/55-56-v1.log, (330 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/pixelverse_conv_eu/2_44/55-56-v1.base_index, offsets:{55}, index:{header_bitflags:0, base_offset:{55}, max_offset:{55}, base_timestamp:{timestamp: 1680360140747}, max_timestamp:{timestamp: 1680360140747}, batch_timestamps_are_monotonic:1, index(1,1,1)}, step:32768, needs_persistence:0}})
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,678 [shard 57] raft - [group_id:658, {kafka/rx-uid-map/23}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 52, dirty offset 10084077
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,678 [shard 53] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,683 [shard  0] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,683 [shard 33] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,683 [shard 42] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 44] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 52] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 27] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard  4] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 20] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 46] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 22] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 10] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 13] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,684 [shard 16] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,685 [shard 24] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,687 [shard 63] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,687 [shard 43] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,688 [shard 36] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,689 [shard 49] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,689 [shard 40] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,694 [shard 47] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,694 [shard  3] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,695 [shard 12] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,695 [shard 51] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 63] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 61] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 29] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 32] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard  6] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 19] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 41] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 59] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,696 [shard 14] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,697 [shard 11] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,697 [shard 57] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,697 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::disconnected_endpoint(node down)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,697 [shard  0] cluster - (rate limiting dropped 3547 similar messages) error refreshing cluster health state - rpc::errc::disconnected_endpoint(node down)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,697 [shard  0] cluster - health_monitor_backend.cc:432 - Failed to refresh cluster health.
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,697 [shard 46] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,698 [shard  8] storage - segment.cc:663 - Creating new segment /var/lib/redpanda/data/kafka/pixelverse_conv_ap/26_40/65-67-v1.log
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,698 [shard  7] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,698 [shard 62] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,698 [shard 19] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 39] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 17] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 63] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 19] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard  6] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 59] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 14] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 57] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 46] rpc - Disconnected from server {host: 10.16.67.50, port: 33145}: std::__1::system_error (error system:104, read: Connection reset by peer)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::disconnected_endpoint(node down)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::disconnected_endpoint(node down)
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,699 [shard 54] raft - [group_id:760, {kafka/rxdata-bidresponse/4}] consensus.cc:200 - [heartbeats_majority] Stepping down as leader in term 57, dirty offset 64141899
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,699 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,700 [shard 21] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,700 [shard 23] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,700 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,701 [shard 34] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,701 [shard  6] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,701 [shard 19] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,701 [shard  1] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,701 [shard 58] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,701 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
[... previous WARN line repeated 48 more times ...]
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,702 [shard 18] r/heartbeat - heartbeat_manager.cc:209 - Closed unresponsive connection to 0
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,702 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
[... previous WARN line repeated 113 more times ...]
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,704 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,705 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,706 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
[... the preceding WARN line ("unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff") repeats 166 more times between 14:33:17,707 and 14:33:17,709 ...]
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,709 [shard 21] raft - [group_id:812, {kafka/rxdata-bidresponse_extra/26}] vote_stm.cc:52 - Sending vote request to {id: {3}, revision: {86}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,709 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: INFO  2023-04-20 14:33:17,709 [shard 21] raft - [group_id:812, {kafka/rxdata-bidresponse_extra/26}] vote_stm.cc:52 - Sending vote request to {id: {0}, revision: {86}} with timeout 1500
Apr 20 14:33:17 srv-01-04-302.iad1.trmr.io rpk[61758]: WARN  2023-04-20 14:33:17,709 [shard  0] cluster - health_monitor_backend.cc:345 - unable to get cluster health metadata from 1 - rpc::errc::exponential_backoff
Apr 20 14:33:19 srv-01-04-302.iad1.trmr.io systemd[1]: redpanda.service: main process exited, code=killed, status=6/ABRT
Apr 20 14:33:19 srv-01-04-302.iad1.trmr.io systemd[1]: Unit redpanda.service entered failed state.
Apr 20 14:33:19 srv-01-04-302.iad1.trmr.io systemd[1]: redpanda.service failed.
Apr 20 14:33:20 srv-01-04-302.iad1.trmr.io systemd[1]: redpanda.service holdoff time over, scheduling restart.
Apr 20 14:33:20 srv-01-04-302.iad1.trmr.io systemd[1]: Stopped Redpanda, the fastest queue in the West..
Apr 20 14:33:20 srv-01-04-302.iad1.trmr.io systemd[1]: Starting Redpanda, the fastest queue in the West....
Apr 20 14:33:23 srv-01-04-302.iad1.trmr.io rpk[59125]: INFO  2023-04-20 14:33:23,351 [shard 62] cluster - controller_backend.cc:500 - [{kafka/pixelverse_conv_eu/10}] Bootstrapping deltas: first - {type: addition, ntp: {kafka/pixelverse_conv_eu/10}, offset: 44, new_assignment: { id: 10  , group_id: 195, replicas: {{node_id: 1, shard: 35}, {node_id: 3, shard: 35}, {node_id: 0, shard: 35}} }, previous_replica_set: {nullopt}}, last - {delta: {type: addition, ntp: {kafka/pixelverse_conv_eu/10}, offset: 44, new_assignment: { id: 10, group_id: 195, replicas: {{node_id: 1,   shard: 35}, {node_id: 3, shard: 35}, {node_id: 0, shard: 35}} }, previous_replica_set: {nullopt}}, retries: 0}
Apr 20 14:33:23 srv-01-04-302.iad1.trmr.io rpk[59125]: INFO  2023-04-20 14:33:23,351 [shard 22] cluster - controller_backend.cc:500 - [{kafka/rx-aerokeys-devicedata/1}] Bootstrapping deltas: first - {type: addition, ntp: {kafka/rx-aerokeys-devicedata/1}, offset: 48, new_assignment: {   id: 1, group_id: 246, replicas: {{node_id: 2, shard: 2}, {node_id: 1, shard: 1}, {node_id: 4, shard: 2}} }, previous_replica_set: {nullopt}}, last - {delta: {type: addition, ntp: {kafka/rx-aerokeys-devicedata/1}, offset: 48, new_assignment: { id: 1, group_id: 246, replicas: {{node_id:   2, shard: 2}, {node_id: 1, shard: 1}, {node_id: 4, shard: 2}} }, previous_replica_set: {nullopt}}, retries: 0}
kargh commented 1 year ago

This happened during an application reload, when we recreate about 25k producer connections per node.

BenPope commented 1 year ago

The 3 backtraces are the same:

seastar::memory::on_allocation_failure(unsigned long) at /v/build/v_deps_build/seastar-prefix/src/seastar/src/core/memory.cc:1821
 (inlined by) seastar::memory::allocate(unsigned long) at /v/build/v_deps_build/seastar-prefix/src/seastar/src/core/memory.cc:1410
operator new(unsigned long) at /v/build/v_deps_build/seastar-prefix/src/seastar/src/core/memory.cc:2079
void* std::__1::__libcpp_operator_new<unsigned long>(unsigned long) at /vectorized/llvm/bin/../include/c++/v1/new:245
 (inlined by) std::__1::__libcpp_allocate(unsigned long, unsigned long) at /vectorized/llvm/bin/../include/c++/v1/new:271
 (inlined by) std::__1::allocator<absl::lts_20220623::container_internal::AlignedType<8ul> >::allocate(unsigned long) at /vectorized/llvm/bin/../include/c++/v1/__memory/allocator.h:105
 (inlined by) util::tracking_allocator<absl::lts_20220623::container_internal::AlignedType<8ul>, std::__1::allocator<absl::lts_20220623::container_internal::AlignedType<8ul> > >::allocate(unsigned long) at /var/lib/buildkite-agent/builds/buildkite-amd64-builders-i-00b5d93b2049284be-1/redpanda/redpanda/src/v/utils/tracking_allocator.h:121
 (inlined by) std::__1::allocator_traits<util::tracking_allocator<absl::lts_20220623::container_internal::AlignedType<8ul>, std::__1::allocator<absl::lts_20220623::container_internal::AlignedType<8ul> > > >::allocate(util::tracking_allocator<absl::lts_20220623::container_internal::AlignedType<8ul>, std::__1::allocator<absl::lts_20220623::container_internal::AlignedType<8ul> > >&, unsigned long) at /vectorized/llvm/bin/../include/c++/v1/__memory/allocator_traits.h:262
 (inlined by) void* absl::lts_20220623::container_internal::Allocate<8ul, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >(util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > >*, unsigned long) at /vectorized/include/absl/container/internal/container_memory.h:64
 (inlined by) absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::initialize_slots() at /vectorized/include/absl/container/internal/raw_hash_set.h:1937
 (inlined by) absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::resize(unsigned long) at /vectorized/include/absl/container/internal/raw_hash_set.h:1978
 (inlined by) absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::rehash_and_grow_if_necessary() at /vectorized/include/absl/container/internal/raw_hash_set.h:2126
absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::prepare_insert(unsigned long) at /vectorized/include/absl/container/internal/raw_hash_set.h:2191
std::__1::pair<unsigned long, bool> absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::find_or_prepare_insert<model::producer_identity>(model::producer_identity const&) at /vectorized/include/absl/container/internal/raw_hash_set.h:2180
 (inlined by) std::__1::pair<absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::iterator, bool> absl::lts_20220623::container_internal::raw_hash_map<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::try_emplace_impl<model::producer_identity const&, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >(model::producer_identity const&, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> >&&) at /vectorized/include/absl/container/internal/raw_hash_map.h:185
 (inlined by) std::__1::pair<absl::lts_20220623::container_internal::raw_hash_set<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::iterator, bool> absl::lts_20220623::container_internal::raw_hash_map<absl::lts_20220623::container_internal::FlatHashMapPolicy<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, absl::lts_20220623::hash_internal::Hash<model::producer_identity>, std::__1::equal_to<model::producer_identity>, util::tracking_allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > >, std::__1::allocator<std::__1::pair<model::producer_identity const, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> > > > > >::try_emplace<model::producer_identity, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> >, 0>(model::producer_identity const&, seastar::lw_shared_ptr<seastar::basic_rwlock<std::__1::chrono::steady_clock> >&&) at /vectorized/include/absl/container/internal/raw_hash_map.h:129
 (inlined by) cluster::rm_stm::get_idempotent_producer_lock(model::producer_identity) at /var/lib/buildkite-agent/builds/buildkite-amd64-builders-i-00b5d93b2049284be-1/redpanda/redpanda/src/v/cluster/rm_stm.h:726
 (inlined by) cluster::rm_stm::replicate_seq(model::batch_identity, model::record_batch_reader, raft::replicate_options, seastar::lw_shared_ptr<available_promise<> >) at /var/lib/buildkite-agent/builds/buildkite-amd64-builders-i-00b5d93b2049284be-1/redpanda/redpanda/src/v/cluster/rm_stm.cc:1532
operator() at /var/lib/buildkite-agent/builds/buildkite-amd64-builders-i-00b5d93b2049284be-1/redpanda/redpanda/src/v/cluster/rm_stm.cc:1114

The code is: https://github.com/redpanda-data/redpanda/blob/v23.1.4/src/v/cluster/rm_stm.h#L724-L730

    ss::lw_shared_ptr<ss::basic_rwlock<>>
    get_idempotent_producer_lock(model::producer_identity pid) {
        auto [it, _] = _idempotent_producer_locks.try_emplace(
          pid, ss::make_lw_shared<ss::basic_rwlock<>>());

        return it->second;
    }

_idempotent_producer_locks is:

    mt::unordered_map_t<
      absl::flat_hash_map,
      model::producer_identity,
      ss::lw_shared_ptr<ss::basic_rwlock<>>>
      _idempotent_producer_locks;

One potential fix would be to change it to a node-based hash map such as absl::node_hash_map.

emaxerrno commented 1 year ago

This makes total sense. @BenPope I wonder if other flat_* containers are at the same risk on the idempotency code paths; I've seen a few backtraces from that code tree.

dotnwat commented 1 year ago

One potential fix would be to change it to a node-based hash map such as absl::node_hash_map.

Looks like we might want a btree_map? At a 6 MB allocation, even after reducing the per-entry cost, this container has to hold a large number of elements; we'd hit the same problem at higher scale.

piyushredpanda commented 1 year ago

We don't have a label for idempotency, so going with transactions (as the code paths are interleaved anyway).

dotnwat commented 1 year ago

We don't have a label for idempotency, so going with transactions (as the code paths are interleaved anyway).

Makes sense. It also looks like a fungible ticket anyone can work on.

rystsov commented 1 year ago

There is a max_concurrent_producer_ids config; if you estimate an upper bound on the number of producers (per partition) and set it, it may help avoid OOMs.

To set it:

    rpk cluster config set max_concurrent_producer_ids 30000

To check that it's set:

    rpk cluster config get max_concurrent_producer_ids

kargh commented 1 year ago

There is a max_concurrent_producer_ids config; if you estimate an upper bound on the number of producers (per partition) and set it, it may help avoid OOMs.

I've tried this in the past. The only way we could get it to work was to set the value extremely high; otherwise, when we did an app reload, the new producer sessions would fail. I suspect it is not clearing them out as fast as the app is trying to make them. To avoid disruptions during these reloads (which typically happen once or twice a week when new releases are pushed), I had to set the value to 100000.

rystsov commented 1 year ago

I suspect it is not clearing them out as fast as the app is trying to make them

Redpanda doesn't affect new connections when the number of concurrent producers passes the threshold; it starts terminating the oldest sessions instead. So if you observed impact with, say, max_concurrent_producer_ids set to 50000, it means you had more than 50000 active producers interacting with the same partition.

I've tried this in the past ... I had to set the value to 100000

Did you have it set to 100k when you observed recent OOMs?

kargh commented 1 year ago

I suspect it is not clearing them out as fast as the app is trying to make them

Redpanda doesn't affect new connections when the number of concurrent producers passes the threshold; it starts terminating the oldest sessions instead. So if you observed impact with, say, max_concurrent_producer_ids set to 50000, it means you had more than 50000 active producers interacting with the same partition.

I've tried this in the past ... I had to set the value to 100000

Did you have it set to 100k when you observed recent OOMs?

We have about 25-28k producers per node. Originally I set the max to 30k and reloaded the app. A reload will kill the current connections and create new ones. That failed spectacularly. I upped it to 50k and tried again with the same results. So, I changed it to 100k and the issue went away.

To answer your question, I've had it set to 100k since the config option was added many months ago.