redpanda-data / redpanda

Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
https://redpanda.com

redpanda: Reader cannot read before start of the log #2140

Open cscorley opened 3 years ago

cscorley commented 3 years ago

Hi team,

Ran into an odd issue while experimenting with the retention.ms topic configuration. Setting an absurdly low retention time (10 seconds) and then restarting Redpanda seems to create a state where rpk topic consume no longer works. For this I am using Docker Compose; I don't think that's necessary to reproduce, but I have not tried it any other way :)

I don't plan to use a retention.ms this low; I was only trying to see what happens once retention kicks in, without having to wait out my intended retention.ms of 1 day.

I don't believe this is the same as issue #340. Whenever I encounter this, the error is always "Reader cannot read before start of the log n-1 < n". This issue also affects other clients I am trying to use, not just rpk topic consume.

Description and reproduction steps follow.

docker-compose.yml

version: "3.4"
services:
  redpanda:
    image: vectorized/redpanda:v21.7.6
    ports:
      - 9092:9092
      - 9644:9644
    command:
      "redpanda start --overprovisioned --smp 1 --memory 1G --reserve-memory 0M
      --node-id 0 --check=false"
    volumes:
      - redpanda-data-test:/var/lib/redpanda/data

volumes:
  redpanda-data-test:

bug.sh

#!/bin/bash

docker compose pull
docker compose up -d

docker compose exec redpanda rpk topic delete test
docker compose exec redpanda rpk topic create test

docker compose exec redpanda rpk topic set-config test retention.ms=10000

docker compose exec redpanda rpk topic describe test

# produce two messages
docker compose exec redpanda rpk topic produce test
docker compose exec redpanda rpk topic produce test

# force down
docker compose down

# restart
docker compose up -d

# sleep long enough for retention
sleep 60

# this should fail with
# Got an error consuming topic 'test', partition 0: EOF
docker compose exec redpanda rpk topic consume test

Output from running above script (including my inputs to produce):

[+] Running 1/1
 ⠿ redpanda Pulled  0.8s
[+] Running 1/1
 ⠿ Container test_redpanda_1  Started  1.0s
Deleted topic 'test'.
Created topic 'test'.
You may check its config with

rpk topic describe 'test'

Added configs retention.ms=10000 to topic 'test'.
  Name                test
  Internal            false
  Cleanup policy      delete
  Config:
  Name                Value   Read-only  Sensitive
  partition_count     1       false      false
  replication_factor  1       false      false
  retention.ms        10000   false      false
  Partitions          1 - 1 out of 1
  Partition           Leader          Replicas   In-Sync Replicas  High Watermark
  0                   0               [0]        [0]               0
Reading message... Press CTRL + D to send, CTRL + C to cancel.
hi
Sent record to partition 0 at offset 0 with timestamp 2021-08-23 21:28:05.207160945 +0000 UTC m=+7.322933294.
Reading message... Press CTRL + D to send, CTRL + C to cancel.
bye
Sent record to partition 0 at offset 1 with timestamp 2021-08-23 21:28:09.681826053 +0000 UTC m=+4.113673514.
[+] Running 2/2
 ⠿ Container test_redpanda_1  Removed  0.3s
 ⠿ Network test_default  Removed  0.2s
[+] Running 2/2
 ⠿ Network test_default  Created  0.3s
 ⠿ Container test_redpanda_1  Started  0.7s
Got an error consuming topic 'test', partition 0: EOF
Got an error consuming topic 'test', partition 0: EOF
Got an error consuming topic 'test', partition 0: EOF
Got an error consuming topic 'test', partition 0: EOF

I have also tried specifying --offset. --offset oldest continues to fail with the same EOF message, while --offset newest does not (and consumes newly produced messages).

> docker compose exec redpanda rpk topic consume test
Got an error consuming topic 'test', partition 0: EOF
^C%

> docker compose exec redpanda rpk topic consume test --offset oldest
Got an error consuming topic 'test', partition 0: EOF
^C%

> docker compose exec redpanda rpk topic consume test --offset newest
{
 "message": "third",
 "partition": 0,
 "offset": 2,
 "timestamp": "2021-08-23T21:37:26.777Z"
}
^C%

> docker compose exec redpanda rpk topic consume test --offset oldest
Got an error consuming topic 'test', partition 0: EOF
^C%

On occasion, the first error is instead the following (I have not found a reliable repro for it):

Got an error consuming topic 'test', partition 0: kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
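Both failure modes look consistent with the consumer's resolved "oldest" offset lagging behind the log's actual start after retention trims segments across the restart. The check the error message hints at can be sketched as a tiny model (hypothetical, for illustration only; this is not Redpanda's actual code, and the names are made up):

```python
# Hypothetical model of a broker-side fetch range check, for illustration only.

class OffsetOutOfRange(Exception):
    """Stands in for the Kafka OFFSET_OUT_OF_RANGE error."""

def fetch(requested: int, log_start: int, log_end: int) -> list:
    """Return the offsets a fetch at `requested` would see.

    `log_start` is the first retained offset (it advances as retention
    deletes segments); `log_end` is the high watermark.
    """
    if requested < log_start:
        # Corresponds to "Reader cannot read before start of the log
        # n-1 < n" and the OFFSET_OUT_OF_RANGE error above.
        raise OffsetOutOfRange(f"{requested} < {log_start}")
    if requested >= log_end:
        return []  # nothing new yet; a real fetch would long-poll here
    return list(range(requested, log_end))

# Suppose retention removed offsets 0 and 1, so the log spans [2, 2).
log_start, log_end = 2, 2

# A consumer whose "oldest" resolves to a stale offset (1, i.e. n-1)
# is rejected, matching the reported error:
try:
    fetch(1, log_start, log_end)
except OffsetOutOfRange as e:
    print("rejected:", e)  # rejected: 1 < 2

# Fetching at the live log start succeeds (an empty fetch), which would
# explain why --offset newest recovers once new records arrive:
print(fetch(2, log_start, log_end))  # []
```

Under this model the bug would be in how the starting offset is resolved or persisted across the restart, not in the fetch path itself, but that's speculation from the error text alone.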

docker compose logs

redpanda_1  | + /usr/local/bin/supervisord -d
redpanda_1  | + exec /usr/bin/rpk redpanda start --overprovisioned --smp 1 --memory 1G --reserve-memory 0M --node-id 0 --check=false
redpanda_1  | We'd love to hear about your experience with redpanda:
redpanda_1  | https://vectorized.io/feedback
redpanda_1  | Starting redpanda...
redpanda_1  | Running:
redpanda_1  | HOSTNAME=f4961c0e9fdb PWD=/ HOME=/var/lib/redpanda SHLVL=0 PATH=/opt/redpanda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin /opt/redpanda/bin/redpanda redpanda --redpanda-cfg /etc/redpanda/redpanda.yaml --memory=1G --lock-memory=false --reserve-memory=0M --smp=1 --overprovisioned
redpanda_1  | 
redpanda_1  | 
redpanda_1  | Welcome to the Redpanda community!
redpanda_1  | 
redpanda_1  | Slack: https://vectorized.io/slack - is the main way the community interacts with one another in real time :)
redpanda_1  | Twitter: https://twitter.com/vectorizedio - come say hi!
redpanda_1  | Github Discussion: https://github.com/vectorizedio/redpanda/discussions - is preferred for longer, async, thoughtful discussions
redpanda_1  | GitHub Issues: https://github.com/vectorizedio/redpanda/issues - is reserved only for actual issues. Please use the GitHub for discussions.
redpanda_1  | Documentation: https://vectorized.io/docs/ - official docs site
redpanda_1  | Support: https://support.vectorized.io/ - to share private information with the production support vectorized team
redpanda_1  | Product Feedback: https://vectorized.io/feedback/ - let us know how we can improve your experience
redpanda_1  | 
redpanda_1  | 
redpanda_1  | INFO   redpanda::main - application.cc:120 - Redpanda v21.7.6 - f93eed1c6888466993107a558b2abde318151720
redpanda_1  | INFO   redpanda::main - application.cc:128 - kernel=5.10.25-linuxkit, nodename=f4961c0e9fdb, machine=x86_64
redpanda_1  | WARN  2021-08-23 21:28:14,032 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
redpanda_1  | INFO  2021-08-23 21:28:14,032 [shard 0] redpanda::main - application.cc:115 - System resources: { cpus: 1, available memory: 1024.000MiB, reserved memory: 0.000bytes}
redpanda_1  | INFO  2021-08-23 21:28:14,033 [shard 0] redpanda::main - application.cc:279 - Configuration:
redpanda_1  | 
redpanda_1  | config_file: /etc/redpanda/redpanda.yaml
redpanda_1  | node_uuid: NfecqYHcDx675f73EScuaFotqQhoHeA6ANsJAdbKpjgSgo353
redpanda_1  | pandaproxy: {}
redpanda_1  | redpanda:
redpanda_1  |   admin:
redpanda_1  |     - address: 0.0.0.0
redpanda_1  |       port: 9644
redpanda_1  |   auto_create_topics_enabled: true
redpanda_1  |   data_directory: /var/lib/redpanda/data
redpanda_1  |   developer_mode: true
redpanda_1  |   kafka_api:
redpanda_1  |     - address: 0.0.0.0
redpanda_1  |       port: 9092
redpanda_1  |   node_id: 0
redpanda_1  |   rpc_server:
redpanda_1  |     address: 0.0.0.0
redpanda_1  |     port: 33145
redpanda_1  |   seed_servers: []
redpanda_1  | rpk:
redpanda_1  |   coredump_dir: /var/lib/redpanda/coredump
redpanda_1  |   enable_memory_locking: false
redpanda_1  |   enable_usage_stats: true
redpanda_1  |   overprovisioned: true
redpanda_1  |   tune_aio_events: false
redpanda_1  |   tune_clocksource: false
redpanda_1  |   tune_coredump: false
redpanda_1  |   tune_cpu: false
redpanda_1  |   tune_disk_irq: false
redpanda_1  |   tune_disk_nomerges: false
redpanda_1  |   tune_disk_scheduler: false
redpanda_1  |   tune_disk_write_cache: false
redpanda_1  |   tune_fstrim: false
redpanda_1  |   tune_network: false
redpanda_1  |   tune_swappiness: false
redpanda_1  |   tune_transparent_hugepages: false
redpanda_1  | schema_registry: {}
redpanda_1  | 
redpanda_1  | 
redpanda_1  | INFO  2021-08-23 21:28:14,034 [shard 0] redpanda::main - application.cc:283 - Use `rpk config set <cfg> <value>` to change values below:
redpanda_1  | INFO  2021-08-23 21:28:14,034 [shard 0] redpanda::main - application.cc:288 - redpanda.advertised_kafka_api:{}    - Address of Kafka API published to the clients
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.full_raft_configuration_recovery_pattern:{}    - Recovery raft configuration on start for NTPs matching pattern
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_min_depth:1  - Minimum queue depth used in kafka queue depth control.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_idle_depth:10    - Queue depth when idleness is detected in kafka queue depth control.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_max_latency_ms:80    - Max latency threshold for kafka queue depth control depth tracking.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_depth_alpha:0.8  - Smoothing factor for kafka queue depth control depth tracking.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_enable:0 - Enable kafka queue depth control.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_window_count:12  - Number of windows used in kafka queue depth control latency tracking.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.superusers:{}  - List of superuser usernames
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_manifest_upload_timeout_ms:10000 - Manifest upload timeout (ms)
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_segment_upload_timeout_ms:30000  - Log segment upload timeout (ms)
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_initial_backoff_ms:100   - Initial backoff time for exponetial backoff algorithm (ms)
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_trust_file:{nullopt} - Path to certificate that should be used to validate server certificate during TLS handshake
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_api_endpoint_port:443    - TLS port override
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_disable_tls:0    - Disable TLS for all S3 connections
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_reconciliation_interval_ms:10000 - Interval at which the archival service runs reconciliation (ms)
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_api_endpoint:{nullopt}   - Optional API endpoint
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_secret_key:{nullopt} - AWS secret key
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.members_backend_retry_ms:5000  - Time between members backend reconciliation loop retries 
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_max_shares:1000    - maximum number of IO and CPU shares that compaction process can use
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_min_shares:10  - minimum number of IO and CPU shares that compaction process can use
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_d_coeff:0.2    - derivative coefficient for compaction PID controller.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_i_coeff:0  - integral coefficient for compaction PID controller.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_depth_update_ms:7000 - Update frequency for kafka queue depth control.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_p_coeff:-12.5  - proportional coefficient for compaction PID controller. This has to be negative since compaction backlog should decrease when number of compaction shares increases
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_update_interval_ms:30000   - 
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.id_allocator_batch_size:1000   - Id allocator allocates messages in batches (each batch is a one log record) and then serves requests from memory without touching the log until the batch is exhausted.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.id_allocator_log_capacity:100  - Capacity of the id_allocator log in number of messages. Once it reached id_allocator_stm should compact the log.
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.reclaim_max_size:4194304   - Maximum batch cache reclaim size
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_transfer_leader_recovery_timeout_ms:10000 - Timeout waiting for follower recovery when transferring leadership
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.fetch_session_eviction_timeout_ms:60000    - Minimum time before which unused session will get evicted from sessions. Maximum time after which inactive session will be deleted is two time given configuration valuecache
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.rm_sync_timeout_ms:2000    - Time to wait state catch up before rejecting a request
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.max_compacted_log_segment_size:5368709120  - Max compacted segment size after consolidation
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.kvstore_flush_interval:10  - Key-value store flush interval (ms)
redpanda_1  | INFO  2021-08-23 21:28:14,035 [shard 0] redpanda::main - application.cc:288 - redpanda.release_cache_on_segment_roll:0    - Free cache when segments roll
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.retention_bytes:{nullopt}  - max bytes per partition on disk before triggering a compaction
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.join_retry_timeout_ms:5000 - Time between cluster join retries in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.zstd_decompress_workspace_bytes:8388608    - Size of the zstd decompression workspace
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.compaction_ctrl_backlog_size:{nullopt} - target backlog size for compaction controller. if not set compaction target compaction backlog would be equal to 
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.auto_create_topics_enabled:1   - Allow topic auto creation
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.reclaim_stable_window:10000    - Length of time above which growth is reset
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_timeout_now_timeout_ms:1000   - Timeout for a timeout now request
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.recovery_append_timeout_ms:5000    - Timeout for append entries requests issued while updating stale follower
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.replicate_append_timeout_ms:3000   - Timeout for append entries requests issued while replicating entries
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.disable_batch_cache:0  - Disable batch cache in log manager
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.log_segment_size:1073741824    - How large in bytes should each log segment be (default 1G)
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.election_timeout_ms:1500   - Election timeout expressed in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.data_directory:{data_directory="/var/lib/redpanda/data"}   - Place where redpanda will keep the data
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.default_topic_partitions:1 - Default number of partitions per topic
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.wait_for_leader_timeout_ms:5000    - Timeout (ms) to wait for leadership in metadata cache
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.transaction_coordinator_cleanup_policy:compact - Cleanup policy for a transaction coordinator topic
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.max_kafka_throttle_delay_ms:60000  - Fail-safe maximum throttle delay on kafka requests
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_idempotence:0   - Enable idempotent producer
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.id_allocator_replication:1 - Replication factor for an id allocator topic
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.default_topic_replications:1   - Default replication factor for new topics
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_api:{{:{host: 0.0.0.0, port: 9092}}} - Address and port of an interface to listen for Kafka API requests
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.group_topic_partitions:1   - Number of partitions in the internal group membership topic
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.log_compaction_interval_ms:10000   - How often do we trigger background compaction
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_enabled:0    - Enable archival storage
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.delete_retention_ms:604800000  - delete segments older than this - default 1 week
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_bucket:{nullopt} - AWS bucket that should be used to store data
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_transactions:0  - Enable transactions
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_access_key:{nullopt} - AWS access key
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.tx_timeout_delay_ms:1000   - Delay before scheduling next check for timed out transactions
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.transactional_id_expiration_ms:604800000   - Producer ids are expired once this time has elapsed after the last write with the given producer id.
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.fetch_max_bytes:57671680   - Maximum number of bytes returned in fetch request
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.fetch_reads_debounce_timeout:1 - Time to wait for next read in fetch request when requested min bytes wasn't reached
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_coproc:0    - Enable coprocessing mode
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.log_compression_type:producer  - Default topic compression type
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.log_message_timestamp_type:CreateTime  - Default topic messages timestamp type
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_latency_alpha:0.002  - Smoothing parameter for kafka queue depth control latency tracking.
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.node_id:0  - Unique id identifying a node in the cluster
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.log_cleanup_policy:delete  - Default topic cleanup policy
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_admin_api:1 - Enable the admin API
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_io_timeout_ms:10000   - Raft I/O timeout
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.stm_snapshot_recovery_policy:violation_recovery_policy: 0  - Describes how to recover from an invariant violation happened during reading a stm snapshot
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.tm_violation_recovery_policy:violation_recovery_policy: 0  - Describes how to recover from an invariant violation happened on the transaction coordinator level
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.coproc_max_batch_size:32768    - Maximum amount of bytes to read from one topic read
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.metadata_dissemination_retry_delay_ms:320  - Delay before retry a topic lookup in a shard or other meta tables
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.transaction_coordinator_replication:1  - Replication factor for a transaction coordinator topic
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.group_new_member_join_timeout:30000    - Timeout for new member joins
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_window_size_ms:1500  - Window size for kafka queue depth control latency tracking.
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.admin_api_tls:{}   - TLS configuration for admin HTTP server
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.group_max_session_timeout_ms:300000    - The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. Default quota tracking window size in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.disable_metrics:0  - Disable registering metrics
redpanda_1  | INFO  2021-08-23 21:28:14,036 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_pid_file:1  - Enable pid file. You probably don't want to change this.
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_group_recovery_timeout_ms:30000  - Kafka group recovery timeout expressed in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.reclaim_growth_window:3000 - Length of time in which reclaim sizes grow
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.dashboard_dir:{nullopt}    - serve http dashboard on / url
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.group_min_session_timeout_ms:6000  - The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.rack:{nullopt} - Rack identifier
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_qdc_max_depth:100    - Maximum queue depth used in kafka queue depth control.
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.cluster_id:{nullopt}   - Cluster identifier
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.reclaim_min_size:131072    - Minimum batch cache reclaim size
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_max_connections:20   - Max number of simultaneous uploads to S3
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_replicate_batch_window_size:1048576   - Max size of requests cached for replication
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.coproc_max_ingest_bytes:655360 - Maximum amount of data to hold from input logs in memory
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.rpc_server_tls:{ enabled: 0 key/cert files: {nullopt} ca file: {nullopt} client_auth_required: 0 } - TLS configuration for RPC server
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.alter_topic_cfg_timeout_ms:5000    - Time to wait for entries replication in controller log when executing alter configuration requst
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.rm_violation_recovery_policy:violation_recovery_policy: 0  - Describes how to recover from an invariant violation happened on the partition level
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.quota_manager_gc_sec:30000 - Quota manager GC frequency in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.seed_server_meta_topic_partitions:7    - Number of partitions in internal raft metadata topic
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.metadata_dissemination_interval_ms:3000    - Interaval for metadata dissemination batching
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.default_num_windows:10 - Default number of quota tracking windows
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.tm_sync_timeout_ms:2000    - Time to wait state catch up before rejecting a request
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.controller_backend_housekeeping_interval_ms:1000   - Interval between iterations of controller backend housekeeping loop
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_heartbeat_timeout_ms:3000 - raft heartbeat RPC timeout
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.min_version:0  - minimum redpanda compat version
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.kafka_api_tls:{}   - TLS configuration for Kafka API endpoint
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_learner_recovery_rate:104857600   - Raft learner recovery rate limit in bytes per sec
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.seed_servers:{}    - List of the seed servers used to join current cluster. If the seed_server list is empty the node will be a cluster root and it will form a new cluster
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.advertised_rpc_api:{nullopt}   - Address of RPC endpoint published to other cluster members
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_sasl:0  - Enable SASL authentication for Kafka connections.
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.kvstore_max_segment_size:16777216  - Key-value maximum segment size (bytes)
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.group_initial_rebalance_delay:300  - Extra delay (ms) added to rebalance phase to wait for new members
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.coproc_supervisor_server:{host: 127.0.0.1, port: 43189}    - IpAddress and port for supervisor service
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.coproc_offset_flush_interval_ms:300000 - Interval for which all coprocessor offsets are flushed to disk
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.segment_appender_flush_timeout_ms:1000 - Maximum delay until buffered data is written
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.rpc_server:{host: 0.0.0.0, port: 33145}    - IpAddress and port for RPC server
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.admin_api_doc_dir:/usr/share/redpanda/admin-api-doc    - Admin API doc directory
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.developer_mode:1   - Skips most of the checks performed at startup, not recomended for production use
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.coproc_max_inflight_bytes:10485760 - Maximum amountt of inflight bytes when sending data to wasm engine
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.admin:{{:{host: 0.0.0.0, port: 9644}}} - Address and port of admin server
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.target_quota_byte_rate:2147483648  - Target quota byte rate (bytes per second) - 2GB default
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.create_topic_timeout_ms:2000   - Timeout (ms) to wait for new topic creation
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.metadata_dissemination_retries:30  - Number of attempts of looking up a topic's meta data like shard before failing a request
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.enable_auto_rebalance_on_node_add:0    - Enable automatic partition rebalancing when new nodes are added
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.node_management_operation_timeout_ms:5000  - Timeout for executing node management operations
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.use_scheduling_groups:0    - Manage CPU scheduling
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.default_window_sec:1000    - Default quota tracking window size in milliseconds
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.max_version:1  - max redpanda compat version
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.raft_heartbeat_interval_ms:150 - Milliseconds for raft leader heartbeats
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.readers_cache_eviction_timeout_ms:30000    - Duration after which inactive readers will be evicted from cache
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.cloud_storage_region:{nullopt} - AWS region that houses the bucket used for storage
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - redpanda.compacted_log_segment_size:268435456   - How large in bytes should each compacted log segment be (default 256MiB)
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy.advertised_pandaproxy_api:{} - Rest API address and port to publish to client
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy.api_doc_dir:/usr/share/redpanda/proxy-api-doc    - API doc directory
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy.pandaproxy_api_tls:{}    - TLS configuration for Pandaproxy api
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy.pandaproxy_api:{{:{host: 0.0.0.0, port: 8082}}}  - Rest API listen address and port
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.sasl_mechanism:   - The SASL mechanism to use when connecting
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.consumer_heartbeat_interval_ms:500    - Interval (in milliseconds) for consumer heartbeats
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.consumer_rebalance_timeout_ms:2000    - Timeout (in milliseconds) for consumer rebalance
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.consumer_request_max_bytes:1048576    - Max bytes to fetch per request
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.consumer_request_timeout_ms:100   - Interval (in milliseconds) for consumer request timeout
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.scram_username:   - Username to use for SCRAM authentication mechanisms
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.produce_batch_delay_ms:100    - Delay (in milliseconds) to wait before sending batch
redpanda_1  | INFO  2021-08-23 21:28:14,037 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.produce_batch_size_bytes:1048576  - Number of bytes to batch before sending to broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.consumer_session_timeout_ms:10000 - Timeout (in milliseconds) for consumer session
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.produce_batch_record_count:1000   - Number of records to batch before sending to broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.retry_base_backoff_ms:100 - Delay (in milliseconds) for initial retry backoff
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.retries:5 - Number of times to retry a request to a broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.scram_password:   - Password to use for SCRAM authentication mechanisms
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.broker_tls:{ enabled: 0 key/cert files: {nullopt} ca file: {nullopt} client_auth_required: 0 }    - TLS configuration for the brokers
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - pandaproxy_client.brokers:{{host: 0.0.0.0, port: 9092}} - List of address and port of the brokers
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry.api_doc_dir:/usr/share/redpanda/proxy-api-doc   - API doc directory
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry.schema_registry_api_tls:{}  - TLS configuration for Schema Registry API
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry.schema_registry_api:{{:{host: 0.0.0.0, port: 8081}}}    - Schema Registry API listen address and port
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.sasl_mechanism:  - The SASL mechanism to use when connecting
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.consumer_heartbeat_interval_ms:500   - Interval (in milliseconds) for consumer heartbeats
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.consumer_rebalance_timeout_ms:2000   - Timeout (in milliseconds) for consumer rebalance
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.consumer_request_max_bytes:1048576   - Max bytes to fetch per request
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.consumer_request_timeout_ms:100  - Interval (in milliseconds) for consumer request timeout
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.scram_username:  - Username to use for SCRAM authentication mechanisms
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.produce_batch_delay_ms:100   - Delay (in milliseconds) to wait before sending batch
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.produce_batch_size_bytes:1048576 - Number of bytes to batch before sending to broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.consumer_session_timeout_ms:10000    - Timeout (in milliseconds) for consumer session
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.produce_batch_record_count:1000  - Number of records to batch before sending to broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.retry_base_backoff_ms:100    - Delay (in milliseconds) for initial retry backoff
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.retries:5    - Number of times to retry a request to a broker
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.scram_password:  - Password to use for SCRAM authentication mechanisms
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.broker_tls:{ enabled: 0 key/cert files: {nullopt} ca file: {nullopt} client_auth_required: 0 }   - TLS configuration for the brokers
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] redpanda::main - application.cc:288 - schema_registry_client.brokers:{{host: 0.0.0.0, port: 9092}}    - List of address and port of the brokers
redpanda_1  | INFO  2021-08-23 21:28:14,038 [shard 0] syschecks - Writing pid file "/var/lib/redpanda/data/pid.lock"
redpanda_1  | INFO  2021-08-23 21:28:14,041 [shard 0] storage - directories.h:33 - Checking `/var/lib/redpanda/data` for supported filesystems
redpanda_1  | ERROR 2021-08-23 21:28:14,042 [shard 0] syschecks - Path: `/var/lib/redpanda/data' is not on XFS. This is a non-supported setup. Expect poor performance.
redpanda_1  | INFO  2021-08-23 21:28:14,051 [shard 0] redpanda::main - application.cc:549 - Partition manager started
redpanda_1  | INFO  2021-08-23 21:28:14,058 [shard 0] storage - segment_set.cc:239 - Recovered: {offset_tracker:{term:0, base_offset:150, committed_offset:193, dirty_offset:193}, compacted_segment=0, finished_self_compaction=0, reader={/var/lib/redpanda/data/redpanda/kvstore/0_0/150-0-v1.log, (7383 bytes)}, writer=nullptr, cache=nullptr, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/redpanda/kvstore/0_0/150-0-v1.base_index, offsets:{150}, index:{header_size:64, header_checksum:4894033785906865937, header_bitflags:0, base_offset:{150}, max_offset:{193}, base_timestamp:{timestamp: 1629754076147}, max_timestamp:{timestamp: 1629754081355}, index(1,1,1)}, step:32768, needs_persistence:0}}
redpanda_1  | INFO  2021-08-23 21:28:14,059 [shard 0] kvstore - kvstore.cc:497 - Replaying segment with base offset 150
redpanda_1  | INFO  2021-08-23 21:28:14,059 [shard 0] kvstore - kvstore.cc:524 - Removing old segment with base offset 126
redpanda_1  | INFO  2021-08-23 21:28:14,062 [shard 0] cluster - raft0_utils.h:32 - Current node is cluster root
redpanda_1  | INFO  2021-08-23 21:28:14,069 [shard 0] storage - segment_set.cc:239 - Recovered: {offset_tracker:{term:7, base_offset:22, committed_offset:29, dirty_offset:29}, compacted_segment=0, finished_self_compaction=0, reader={/var/lib/redpanda/data/redpanda/controller/0_0/22-7-v1.log, (912 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/redpanda/controller/0_0/22-7-v1.base_index, offsets:{22}, index:{header_size:64, header_checksum:4533295671171083717, header_bitflags:0, base_offset:{22}, max_offset:{29}, base_timestamp:{timestamp: 1629754076170}, max_timestamp:{timestamp: 1629754081343}, index(1,1,1)}, step:32768, needs_persistence:0}}
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:909 - Starting
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:1169 - Recovered persistent state from kvstore: voted for: {id: {0}, revision: {0}}, term: 7
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:941 - Starting with voted_for {id: {0}, revision: {0}} term 7 initial_state false
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:966 - Read bootstrap state: data_seen 1 config_seen 0 eol false commit 29 term 0 prev_idx 0 prev_term 0 config_tracker -9223372036854775808 commit_base_tracker 29 config {nullopt}
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:967 - Current log offsets: {start_offset:0, committed_offset:29, committed_offset_term:7, dirty_offset:29, dirty_offset_term:7, last_term_start_offset:22}
redpanda_1  | INFO  2021-08-23 21:28:14,070 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:985 - Truncating configurations at 29
redpanda_1  | INFO  2021-08-23 21:28:14,080 [shard 0] storage - segment.cc:629 - Creating new segment /var/lib/redpanda/data/redpanda/kvstore/0_0/194-0-v1.log
redpanda_1  | INFO  2021-08-23 21:28:14,093 [shard 0] raft - [group_id:0, {redpanda/controller/0}] consensus.cc:1048 - started raft, log offsets: {start_offset:0, committed_offset:29, committed_offset_term:7, dirty_offset:29, dirty_offset_term:7, last_term_start_offset:22}, term: 7, configuration: {current: {voters: {{id: {0}, revision: {0}}}, learners: {}}, old:{nullopt}, revision: 0, brokers: {{id: 0, kafka_advertised_listeners: {{:{host: 0.0.0.0, port: 9092}}}, rpc_address: {host: 0.0.0.0, port: 33145}, rack: {nullopt}, properties: {cores 1, mem_available 0, disk_available 0}, membership_state: active}}}
redpanda_1  | INFO  2021-08-23 21:28:14,093 [shard 0] cluster - state_machine.cc:29 - Starting state machine
redpanda_1  | INFO  2021-08-23 21:28:14,093 [shard 0] cluster - members_manager.cc:72 - starting cluster::members_manager...
redpanda_1  | INFO  2021-08-23 21:28:14,103 [shard 0] raft - [group_id:0, {redpanda/controller/0}] vote_stm.cc:244 - became the leader term:8
redpanda_1  | INFO  2021-08-23 21:28:14,103 [shard 0] storage - segment.cc:629 - Creating new segment /var/lib/redpanda/data/redpanda/controller/0_0/30-8-v1.log
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] storage - segment_set.cc:239 - Recovered: {offset_tracker:{term:1, base_offset:0, committed_offset:2, dirty_offset:2}, compacted_segment=0, finished_self_compaction=0, reader={/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.log, (323 bytes)}, writer=nullptr, cache={cache_size=0}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.base_index, offsets:{0}, index:{header_size:64, header_checksum:6019489771498303336, header_bitflags:0, base_offset:{0}, max_offset:{2}, base_timestamp:{timestamp: 1629754076891}, max_timestamp:{timestamp: 1629754089681}, index(1,1,1)}, step:32768, needs_persistence:0}}
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:909 - Starting
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:1169 - Recovered persistent state from kvstore: voted for: {id: {0}, revision: {25}}, term: 1
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:941 - Starting with voted_for {id: {0}, revision: {25}} term 1 initial_state false
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:966 - Read bootstrap state: data_seen 2 config_seen 0 eol false commit 2 term 0 prev_idx 1 prev_term 0 config_tracker -9223372036854775808 commit_base_tracker 2 config {nullopt}
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:967 - Current log offsets: {start_offset:0, committed_offset:2, committed_offset_term:1, dirty_offset:2, dirty_offset_term:1, last_term_start_offset:0}
redpanda_1  | INFO  2021-08-23 21:28:14,252 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:985 - Truncating configurations at 2
redpanda_1  | INFO  2021-08-23 21:28:14,274 [shard 0] raft - [group_id:4, {kafka/test/0}] consensus.cc:1048 - started raft, log offsets: {start_offset:0, committed_offset:2, committed_offset_term:1, dirty_offset:2, dirty_offset_term:1, last_term_start_offset:0}, term: 1, configuration: {current: {voters: {{id: {0}, revision: {25}}}, learners: {}}, old:{nullopt}, revision: 25, brokers: {{id: 0, kafka_advertised_listeners: {{:{host: kubernetes.docker.internal, port: 9092}}}, rpc_address: {host: 0.0.0.0, port: 33145}, rack: {nullopt}, properties: {cores 1, mem_available 0, disk_available 0}, membership_state: active}}}
redpanda_1  | INFO  2021-08-23 21:28:14,274 [shard 0] redpanda::main - application.cc:916 - Started RPC server listening at {host: 0.0.0.0, port: 33145}
redpanda_1  | INFO  2021-08-23 21:28:14,274 [shard 0] redpanda::main - application.cc:969 - Started Kafka API server listening at {{:{host: 0.0.0.0, port: 9092}}}
redpanda_1  | INFO  2021-08-23 21:28:14,275 [shard 0] admin_api_server - admin_server.cc:97 - Started HTTP admin service listening at {{:{host: 0.0.0.0, port: 9644}}}
redpanda_1  | INFO  2021-08-23 21:28:14,275 [shard 0] redpanda::main - application.cc:824 - Started Pandaproxy listening at {{:{host: 0.0.0.0, port: 8082}}}
redpanda_1  | INFO  2021-08-23 21:28:14,275 [shard 0] redpanda::main - application.cc:834 - Started Schema Registry listening at {{:{host: 0.0.0.0, port: 8081}}}
redpanda_1  | INFO  2021-08-23 21:28:14,275 [shard 0] redpanda::main - application.cc:839 - Successfully started Redpanda!
redpanda_1  | INFO  2021-08-23 21:28:14,286 [shard 0] raft - [group_id:4, {kafka/test/0}] vote_stm.cc:244 - became the leader term:2
redpanda_1  | INFO  2021-08-23 21:28:14,286 [shard 0] storage - segment.cc:629 - Creating new segment /var/lib/redpanda/data/kafka/test/0_25/3-2-v1.log
redpanda_1  | INFO  2021-08-23 21:28:36,460 [shard 0] storage - disk_log_impl.cc:847 - gc[time_based_retention] - tombstone & delete segment: {offset_tracker:{term:1, base_offset:0, committed_offset:2, dirty_offset:2}, compacted_segment=0, finished_self_compaction=0, reader={/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.log, (323 bytes)}, writer=nullptr, cache={cache_size=2}, compaction_index:nullopt, closed=0, tombstone=0, index={file:/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.base_index, offsets:{0}, index:{header_size:64, header_checksum:6019489771498303336, header_bitflags:0, base_offset:{0}, max_offset:{2}, base_timestamp:{timestamp: 1629754076891}, max_timestamp:{timestamp: 1629754089681}, index(1,1,1)}, step:32768, needs_persistence:0}}
redpanda_1  | INFO  2021-08-23 21:28:36,461 [shard 0] storage - segment.cc:104 - removing: {"/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.log", "/var/lib/redpanda/data/kafka/test/0_25/0-1-v1.base_index"}
redpanda_1  | INFO  2021-08-23 21:29:12,427 [shard 0] kafka - connection_context.cc:280 - Detected error processing request: std::runtime_error (Reader cannot read before start of the log 2 < 3)
redpanda_1  | INFO  2021-08-23 21:29:14,428 [shard 0] kafka - connection_context.cc:280 - Detected error processing request: std::runtime_error (Reader cannot read before start of the log 2 < 3)
redpanda_1  | INFO  2021-08-23 21:29:16,430 [shard 0] kafka - connection_context.cc:280 - Detected error processing request: std::runtime_error (Reader cannot read before start of the log 2 < 3)
redpanda_1  | INFO  2021-08-23 21:29:17,059 [shard 0] kafka - connection_context.cc:280 - Detected error processing request: std::runtime_error (Reader cannot read before start of the log 2 < 3)
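
The error lines match what the log shows just above: time-based retention tombstoned and deleted the only data segment (offsets 0–2) and the log start advanced to 3, so a fetch at offset 2 now fails the broker-side bounds check. A minimal sketch of that check, with illustrative names (not Redpanda's actual implementation):

```python
# Sketch of the broker-side bounds check that produces
# "Reader cannot read before start of the log n-1 < n".
# Function and variable names are hypothetical.

def check_read_offset(requested: int, log_start: int) -> None:
    # After retention garbage-collects the segment holding offsets
    # 0..2, log_start advances to 3; a fetch at offset 2 then fails.
    if requested < log_start:
        raise RuntimeError(
            f"Reader cannot read before start of the log "
            f"{requested} < {log_start}"
        )

check_read_offset(3, 3)  # reading from the new start offset is fine
try:
    check_read_offset(2, 3)
except RuntimeError as e:
    print(e)  # Reader cannot read before start of the log 2 < 3
```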

JIRA Link: CORE-723

emaxerrno commented 3 years ago

Great find! Thanks @cscorley !

emaxerrno commented 3 years ago

@cscorley can you try with the v21.8.1-beta4 image? We have a new release coming Tuesday with a ton of fixes in this area.

cscorley commented 3 years ago

@senior7515 I was actually able to try this on the v21.8.1-beta5 image late yesterday as well. Same result.

emaxerrno commented 3 years ago

Thanks. Looking into this.

twmb commented 2 years ago

Is this specific to rpk, or does it affect all clients? It's possible rpk was not configured to reset properly on out-of-range errors. A number of rpk changes have since gone in that should handle this case correctly. We can try to recreate the above with rpk, and if the issue is resolved, we'll close this issue.
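
Clients typically recover from this by treating the failure as an offset-out-of-range condition and resetting per policy, as Kafka's `auto.offset.reset` setting does. A rough sketch of that client-side decision, with hypothetical names (not any particular client library's API):

```python
# Client-side offset reset after an out-of-range fetch, mirroring
# Kafka's auto.offset.reset semantics ("earliest" / "latest").
# resolve_out_of_range is a hypothetical helper for illustration.

def resolve_out_of_range(requested: int, log_start: int, log_end: int,
                         policy: str = "earliest") -> int:
    """Return the offset to resume from after an out-of-range error."""
    if log_start <= requested <= log_end:
        return requested  # offset is still valid; no reset needed
    if policy == "earliest":
        return log_start  # jump forward to the first retained record
    if policy == "latest":
        return log_end    # skip ahead to the end of the log
    raise RuntimeError(
        f"offset {requested} out of range [{log_start}, {log_end}]"
    )

# In the reported scenario the consumer holds offset 2 while retention
# has moved the log start to 3, so an "earliest" policy resumes from 3.
```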

cscorley commented 2 years ago

I was able to reproduce this on several clients, as stated above. My example used rpk so that your team could reproduce it yourselves. We have chosen not to use Redpanda, so I will not be testing this further. Thanks!