Closed — NyaliaLui closed this issue 1 year ago
The Grafana logs before the segfault:
2022-01-16T15:59:51.280041202Z stderr F INFO 2022-01-16 15:59:51,279 [shard 0] kafka - fetch.cc:346 - Fetch requested very large response (220200960), clamping each partition's max_bytes to 3195660 bytes
2022-01-16T15:59:51.28004731Z stderr F INFO 2022-01-16 15:59:51,279 [shard 11] kafka - fetch.cc:346 - Fetch requested very large response (230686720), clamping each partition's max_bytes to 3050402 bytes
2022-01-16T15:59:51.280051187Z stderr F INFO 2022-01-16 15:59:51,279 [shard 10] kafka - fetch.cc:346 - Fetch requested very large response (230686720), clamping each partition's max_bytes to 3050402 bytes
2022-01-16T15:59:51.280055828Z stderr F INFO 2022-01-16 15:59:51,279 [shard 12] kafka - fetch.cc:346 - Fetch requested very large response (220200960), clamping each partition's max_bytes to 3195660 bytes
2022-01-16T15:59:51.280059719Z stderr F INFO 2022-01-16 15:59:51,279 [shard 13] kafka - fetch.cc:346 - Fetch requested very large response (220200960), clamping each partition's max_bytes to 3195660 bytes
2022-01-16T15:59:51.280063513Z stderr F INFO 2022-01-16 15:59:51,279 [shard 15] kafka - fetch.cc:346 - Fetch requested very large response (220200960), clamping each partition's max_bytes to 3195660 bytes
2022-01-16T15:59:51.280067244Z stderr F INFO 2022-01-16 15:59:51,279 [shard 9] kafka - fetch.cc:346 - Fetch requested very large response (230686720), clamping each partition's max_bytes to 3050402 bytes
2022-01-16T15:59:51.280078468Z stderr F INFO 2022-01-16 15:59:51,279 [shard 14] kafka - fetch.cc:346 - Fetch requested very large response (220200960), clamping each partition's max_bytes to 3195660 bytes
2022-01-16T15:59:51.280155065Z stderr F INFO 2022-01-16 15:59:51,280 [shard 3] kafka - fetch.cc:346 - Fetch requested very large response (230686720), clamping each partition's max_bytes to 3050402 bytes
2022-01-16T15:59:51.280373543Z stderr F INFO 2022-01-16 15:59:51,280 [shard 1] kafka - fetch.cc:346 - Fetch requested very large response (230686720), clamping each partition's max_bytes to 3050402 bytes
Brokers were configured with the new storage_read_buffer_size and storage_read_readahead_count configs:
Run kubectl edit cluster -n <namespace> <cluster name> to see:
spec:
  additionalConfiguration:
    redpanda.default_topic_replications: "3"
    redpanda.id_allocator_replication: "3"
    redpanda.storage_read_buffer_size: "32768"
    redpanda.storage_read_readahead_count: "2"
The topic:
SUMMARY
=======
NAME        test-2k-e
PARTITIONS  2048
REPLICAS    3

CONFIGS
=======
KEY                     VALUE                        SOURCE
cleanup.policy          delete                       DYNAMIC_TOPIC_CONFIG
compression.type        producer                     DEFAULT_CONFIG
message.timestamp.type  CreateTime                   DEFAULT_CONFIG
partition_count         2048                         DYNAMIC_TOPIC_CONFIG
redpanda.datapolicy     function_name: script_name:  DEFAULT_CONFIG
redpanda.remote.read    true                         DYNAMIC_TOPIC_CONFIG
redpanda.remote.write   true                         DYNAMIC_TOPIC_CONFIG
replication_factor      3                            DYNAMIC_TOPIC_CONFIG
retention.bytes         2147483648                   DYNAMIC_TOPIC_CONFIG
retention.ms            604800000                    DEFAULT_CONFIG
segment.bytes           1073741824                   DYNAMIC_TOPIC_CONFIG
This is the constructor for an append_challenged_posix_file_impl::op failing, presumably on a failed allocation.
Closing individual bad_alloc tickets: our mid-term position will probably be to assert out on bad_alloc.
Version & Environment
Redpanda version: v21.11.3-si-beta8
The following backtrace was seen on BYOC during long-running tests for shadow indexing.