chris-ng-scmp opened this issue 5 years ago
Hey @chris-ng-scmp! I'm going to try to repro this tomorrow and see how it goes. Maybe a couple of quick questions:
- Which version of yugabyte were you installing?
- When did the server start logging this? Were you visiting some UI page on the master? Any chance you can upload the whole master log file, for the one spewing these logs?
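If uploading the whole file is awkward, grepping just the webserver lines (with a little leading context) out of a saved dump is usually enough to start with. A minimal sketch, assuming the log was captured with `kubectl logs yb-master-0 > master.log` (the heredoc below is only a stand-in excerpt for illustration):

```shell
# Stand-in excerpt of a saved master log; the real file would come from
# `kubectl logs yb-master-0 > master.log`.
cat > master.log <<'EOF'
I1010 03:09:17.995929 1 webserver.cc:147] Starting webserver on 0.0.0.0:7000
I1010 03:09:18.004554 1 webserver.cc:239] Webserver started. Bound to: http://0.0.0.0:7000/
I1010 03:09:18.087517 21 webserver.cc:278] Webserver: error reading: Connection reset by peer
EOF

# Show each reset error together with the line immediately before it.
grep -B1 'Connection reset by peer' master.log
```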
Both Helm chart versions 2.0.0 and 1.3.0 produce these logs.
The log starts to show at a very early stage, as soon as a master pod's container starts:
kubectl logs yb-master-0
I1010 03:09:17.839452 1 master_main.cc:93] NumCPUs determined to be: 2
I1010 03:09:17.926501 1 server_base_options.cc:219] Resolved master addresses: [10.100.1.176:7100, 10.100.1.37:7100, 10.100.2.52:7100]
I1010 03:09:17.926896 1 mem_tracker.cc:250] MemTracker: hard memory limit is 6.000000 GB
I1010 03:09:17.926911 1 mem_tracker.cc:252] MemTracker: soft memory limit is 5.100000 GB
I1010 03:09:17.935341 1 master_main.cc:114] Initializing master server...
I1010 03:09:17.942512 1 fs_manager.cc:249] Opened local filesystem: /mnt/data0
uuid: "47411a36df4a420c91d89988a3200b8c"
format_stamp: "Formatted at 2019-10-02 07:08:48 on yb-master-0"
I1010 03:09:17.942876 1 server_base.cc:223] Auto setting FLAGS_num_reactor_threads to 2
I1010 03:09:17.981986 1 master_main.cc:117] Starting Master server...
I1010 03:09:17.995929 1 webserver.cc:147] Starting webserver on 0.0.0.0:7000
I1010 03:09:17.995959 1 webserver.cc:152] Document root: /home/yugabyte/www
I1010 03:09:18.004554 1 webserver.cc:239] Webserver started. Bound to: http://0.0.0.0:7000/
I1010 03:09:18.032287 1 rpc_server.cc:167] RPC server started. Bound to: 10.100.1.176:7100
I1010 03:09:18.032356 1 server_base.cc:477] Using private ip address 10.100.1.176
I1010 03:09:18.046052 20 sys_catalog.cc:194] Trying to load previous SysCatalogTable data from disk
I1010 03:09:18.087517 21 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1010 03:09:18.242916 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 47411a36df4a420c91d89988a3200b8c: Updating active role from UNKNOWN_ROLE to FOLLOWER. Consensus state: current_term: 5 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "47411a36df4a420c91d89988a3200b8c" member_type: VOTER last_known_private_addr { host: "10.100.1.162" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "53499a85781e4646a1d94247cd20329e" member_type: VOTER last_known_private_addr { host: "10.100.1.27" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "c447ab927e914f038af4c51841403506" member_type: VOTER last_known_private_addr { host: "10.100.2.43" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1010 03:09:18.244333 20 sys_catalog.cc:239] Configuring consensus for distributed operation...
I1010 03:09:18.244407 20 server_base.cc:477] Using private ip address 10.100.1.176
I1010 03:09:18.244783 20 tablet_peer.cc:1026] T 00000000000000000000000000000000 P 47411a36df4a420c91d89988a3200b8c [state=BOOTSTRAPPING]: Changed state from NOT_STARTED to BOOTSTRAPPING
I1010 03:09:18.263041 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 47411a36df4a420c91d89988a3200b8c: Updating active role from UNKNOWN_ROLE to FOLLOWER. Consensus state: current_term: 5 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "47411a36df4a420c91d89988a3200b8c" member_type: VOTER last_known_private_addr { host: "10.100.1.162" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "53499a85781e4646a1d94247cd20329e" member_type: VOTER last_known_private_addr { host: "10.100.1.27" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "c447ab927e914f038af4c51841403506" member_type: VOTER last_known_private_addr { host: "10.100.2.43" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1010 03:09:18.263110 20 tablet_bootstrap_if.cc:75] T 00000000000000000000000000000000 P 47411a36df4a420c91d89988a3200b8c: Bootstrap starting.
I1010 03:09:18.321290 20 docdb_rocksdb_util.cc:412] Auto setting FLAGS_rocksdb_max_background_flushes to 1
I1010 03:09:18.321339 20 docdb_rocksdb_util.cc:432] Auto setting FLAGS_rocksdb_max_background_compactions to 1
I1010 03:09:18.321344 20 docdb_rocksdb_util.cc:441] Auto setting FLAGS_rocksdb_base_background_compactions to 1
@bmatican
@chris-ng-scmp I am not able to repro this locally using helm and minikube...
Are you deploying this on a k8s cluster or locally, on minikube? Any chance there are some external processes, such as load balancers doing health checks, that might be hitting our webserver endpoints?
Separately, if you have a longer piece of the log, say covering 1-2 hours, I would appreciate it if you could share it, so I can also see if there's some pattern in when and how many times these are being logged!
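One quick way to look for that pattern is to bucket the reset errors by minute: a fixed cadence would suggest an external prober (e.g. a cloud load balancer health check) rather than user traffic. A sketch, assuming glog-style timestamps like the ones above (the heredoc is only a stand-in for a real `kubectl logs yb-master-0 > master.log` dump):

```shell
# Stand-in excerpt; timestamps below are illustrative, not from the real cluster.
cat > master.log <<'EOF'
I1010 03:09:18.087517 21 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1010 03:09:48.100000 21 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1010 03:10:18.200000 22 webserver.cc:278] Webserver: error reading: Connection reset by peer
EOF

# Field 2 of a glog line is the wall-clock time; keep the HH:MM prefix as the bucket.
grep 'Webserver: error reading' master.log \
  | awk '{ print substr($2, 1, 5) }' \
  | sort | uniq -c
```

A steady count per minute across hours would point at something probing port 7000 on a timer.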
@bmatican Thanks for checking. I just tried again using this YAML https://raw.githubusercontent.com/YugaByte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset.yaml on both Minikube and a K8s cluster provided by AliCloud (the only difference being the storageClassName).
There is no such error on Minikube; it appears only on AliCloud, and I haven't added any health check.
Attached is a longer log for you:
kubectl logs -f yb-master-0
I1014 02:19:44.952785 1 master_main.cc:93] NumCPUs determined to be: 2
I1014 02:19:44.953732 1 mem_tracker.cc:250] MemTracker: hard memory limit is 0.763815 GB
I1014 02:19:44.953765 1 mem_tracker.cc:252] MemTracker: soft memory limit is 0.649243 GB
I1014 02:19:44.957202 1 master_main.cc:114] Initializing master server...
I1014 02:19:44.957572 1 server_base.cc:438] Could not load existing FS layout: Not found (yb/util/env_posix.cc:1405): /mnt/data0/yb-data/master/instance: No such file or directory (system error 2)
I1014 02:19:44.957603 1 server_base.cc:439] Creating new FS layout
I1014 02:19:44.967784 1 fs_manager.cc:461] Generated new instance metadata in path /mnt/data0/yb-data/master/instance:
uuid: "5e4ad9f98611428a84ecfdbd7f0c100e"
format_stamp: "Formatted at 2019-10-14 02:19:44 on yb-master-0"
I1014 02:19:44.970082 1 fs_manager.cc:249] Opened local filesystem: /mnt/data0
uuid: "5e4ad9f98611428a84ecfdbd7f0c100e"
format_stamp: "Formatted at 2019-10-14 02:19:44 on yb-master-0"
I1014 02:19:44.970518 1 server_base.cc:223] Auto setting FLAGS_num_reactor_threads to 2
I1014 02:19:44.972756 1 master_main.cc:117] Starting Master server...
I1014 02:19:44.975607 1 webserver.cc:147] Starting webserver on 0.0.0.0:7000
I1014 02:19:44.975632 1 webserver.cc:152] Document root: /home/yugabyte/www
I1014 02:19:44.976055 1 webserver.cc:239] Webserver started. Bound to: http://0.0.0.0:7000/
I1014 02:19:44.976325 1 rpc_server.cc:167] RPC server started. Bound to: 10.100.1.43:7100
I1014 02:19:44.976384 1 server_base.cc:477] Using private ip address yb-master-0.yb-masters.technology-system.svc.cluster.local
I1014 02:19:44.979310 20 sys_catalog.cc:260] Creating new SysCatalogTable data
I1014 02:19:44.986001 20 sys_catalog.cc:310] Determining permanent_uuid for [yb-master-0.yb-masters.technology-system.svc.cluster.local:7100]
I1014 02:19:44.988683 22 server_base.cc:477] Using private ip address yb-master-0.yb-masters.technology-system.svc.cluster.local
I1014 02:19:44.989342 20 sys_catalog.cc:310] Determining permanent_uuid for [yb-master-1.yb-masters.technology-system.svc.cluster.local:7100]
I1014 02:19:44.994350 20 sys_catalog.cc:310] Determining permanent_uuid for [yb-master-2.yb-masters.technology-system.svc.cluster.local:7100]
I1014 02:19:44.999428 20 sys_catalog.cc:325] Setting up raft configuration: opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
I1014 02:19:44.999688 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from UNKNOWN_ROLE to FOLLOWER. Consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:45.004758 20 server_base.cc:477] Using private ip address yb-master-0.yb-masters.technology-system.svc.cluster.local
I1014 02:19:45.004969 20 tablet_peer.cc:1026] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [state=BOOTSTRAPPING]: Changed state from NOT_STARTED to BOOTSTRAPPING
I1014 02:19:45.005442 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from UNKNOWN_ROLE to FOLLOWER. Consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:45.005482 20 tablet_bootstrap_if.cc:75] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Bootstrap starting.
I1014 02:19:45.006134 20 docdb_rocksdb_util.cc:412] Auto setting FLAGS_rocksdb_max_background_flushes to 1
I1014 02:19:45.006184 20 docdb_rocksdb_util.cc:432] Auto setting FLAGS_rocksdb_max_background_compactions to 1
I1014 02:19:45.006191 20 docdb_rocksdb_util.cc:441] Auto setting FLAGS_rocksdb_base_background_compactions to 1
I1014 02:19:45.006196 20 docdb_rocksdb_util.cc:452] Auto setting FLAGS_priority_thread_pool_size to 1
I1014 02:19:45.006376 20 tablet.cc:439] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Creating RocksDB database in dir /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000
I1014 02:19:45.014241 20 tablet.cc:556] Opening RocksDB at: /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000
I1014 02:19:45.014685 20 db_impl.cc:756] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Creating manifest 1
I1014 02:19:45.020928 20 version_set.cc:2809] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Recovered from manifest file:/mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 1125899906842624, log_number is 0,prev_log_number is 0,max_column_family is 0, flushed_values is <NULL>
I1014 02:19:45.020956 20 version_set.cc:2817] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Column family [default] (ID 0), log number is 0
I1014 02:19:45.023694 20 tablet.cc:607] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Successfully opened a RocksDB database at /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000, obj: 0x56e8000
I1014 02:19:45.023751 20 tablet_bootstrap.cc:420] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Time spent opening tablet: real 0.018s user 0.003s sys 0.001s
I1014 02:19:45.023891 20 tablet_bootstrap.cc:358] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: No blocks or log segments found. Creating new log.
I1014 02:19:45.024046 20 log.cc:801] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Setting log wal retention time to 0 seconds
I1014 02:19:45.024262 20 log.cc:414] durable_wal_write is turned on.
I1014 02:19:45.027954 20 tablet_bootstrap_if.cc:75] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: No bootstrap required, opened a new log
I1014 02:19:45.028010 20 log.cc:801] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Setting log wal retention time to 0 seconds
I1014 02:19:45.030652 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from UNKNOWN_ROLE to FOLLOWER. Consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:45.031893 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from FOLLOWER to FOLLOWER. Consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:45.031951 20 raft_consensus.cc:354] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending operations. Active config: opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
I1014 02:19:45.032032 20 raft_consensus.cc:384] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Consensus starting up: Expiring fail detector timer to make a prompt election more likely
I1014 02:19:45.032125 20 raft_consensus.cc:875] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5e4ad9f98611428a84ecfdbd7f0c100e, State: 1, Role: FOLLOWER, Watermarks: {Received: { term: 0 index: 0 } Committed: { term: 0 index: 0 }} Leader: { term: 0 index: 0 }, new leader: , initial_fd_wait: 0.313s
I1014 02:19:45.032171 20 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from FOLLOWER to FOLLOWER. Consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:45.032274 20 consensus_queue.cc:204] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated op: 0.0, Majority replicated op: 0.0, Committed index: 0.0, Last appended: 0.0, Current term: 0, Majority size: -1, State: QUEUE_OPEN, Mode: NON_LEADER
I1014 02:19:45.032297 20 raft_consensus.cc:2795] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Calling mark dirty synchronously for reason code CONSENSUS_STARTED
I1014 02:19:45.032351 20 sys_catalog.cc:351] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: SysCatalogTable state changed. Locked=0. Reason: RaftConsensus started. Latest consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I1014 02:19:45.032375 20 sys_catalog.cc:355] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: This master's current role is: FOLLOWER
I1014 02:19:45.032399 20 tablet_peer.cc:1026] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [state=RUNNING]: Changed state from BOOTSTRAPPING to RUNNING
I1014 02:19:45.032429 20 sys_catalog.cc:351] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: SysCatalogTable state changed. Locked=0. Reason: Started TabletPeer. Latest consensus state: current_term: 0 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I1014 02:19:45.032436 20 sys_catalog.cc:355] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: This master's current role is: FOLLOWER
I1014 02:19:45.032918 20 db_impl.cc:2452] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: SetOptions() on column family [default], inputs:
I1014 02:19:45.032943 20 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:45.032948 20 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:45.032953 20 db_impl.cc:2460] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] SetOptions succeeded
I1014 02:19:45.032961 20 mutable_cf_options.cc:83] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: write_buffer_size: 134217728
I1014 02:19:45.032966 20 mutable_cf_options.cc:85] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_write_buffer_number: 2
I1014 02:19:45.032970 20 mutable_cf_options.cc:87] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: arena_block_size: 131072
I1014 02:19:45.032975 20 mutable_cf_options.cc:89] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_bits: 0
I1014 02:19:45.032979 20 mutable_cf_options.cc:91] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_probes: 6
I1014 02:19:45.032984 20 mutable_cf_options.cc:93] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_huge_page_tlb_size: 0
I1014 02:19:45.032987 20 mutable_cf_options.cc:95] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_successive_merges: 0
I1014 02:19:45.032992 20 mutable_cf_options.cc:97] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: filter_deletes: 0
I1014 02:19:45.032996 20 mutable_cf_options.cc:99] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: 1
I1014 02:19:45.033001 20 mutable_cf_options.cc:101] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: soft_pending_compaction_bytes_limit: 0
I1014 02:19:45.033005 20 mutable_cf_options.cc:103] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: hard_pending_compaction_bytes_limit: 0
I1014 02:19:45.033010 20 mutable_cf_options.cc:105] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_file_num_compaction_trigger: 5
I1014 02:19:45.033013 20 mutable_cf_options.cc:107] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:45.033018 20 mutable_cf_options.cc:109] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:45.033022 20 mutable_cf_options.cc:111] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_grandparent_overlap_factor: 10
I1014 02:19:45.033026 20 mutable_cf_options.cc:113] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: expanded_compaction_factor: 25
I1014 02:19:45.033030 20 mutable_cf_options.cc:115] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: source_compaction_factor: 1
I1014 02:19:45.033035 20 mutable_cf_options.cc:117] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_base: 2097152
I1014 02:19:45.033040 20 mutable_cf_options.cc:119] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_multiplier: 1
I1014 02:19:45.033044 20 mutable_cf_options.cc:121] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_base: 10485760
I1014 02:19:45.033048 20 mutable_cf_options.cc:123] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier: 10
I1014 02:19:45.033054 20 mutable_cf_options.cc:131] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier_additional: 1, 1, 1, 1, 1, 1, 1
I1014 02:19:45.033063 20 mutable_cf_options.cc:133] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: verify_checksums_in_compaction: 1
I1014 02:19:45.033071 20 mutable_cf_options.cc:135] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_sequential_skip_in_iterations: 8
I1014 02:19:45.033552 20 db_impl.cc:2452] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: SetOptions() on column family [default], inputs:
I1014 02:19:45.033577 20 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: false
I1014 02:19:45.033583 20 db_impl.cc:2460] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] SetOptions succeeded
I1014 02:19:45.033588 20 mutable_cf_options.cc:83] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: write_buffer_size: 134217728
I1014 02:19:45.033592 20 mutable_cf_options.cc:85] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_write_buffer_number: 2
I1014 02:19:45.033596 20 mutable_cf_options.cc:87] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: arena_block_size: 131072
I1014 02:19:45.033601 20 mutable_cf_options.cc:89] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_bits: 0
I1014 02:19:45.033605 20 mutable_cf_options.cc:91] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_probes: 6
I1014 02:19:45.033610 20 mutable_cf_options.cc:93] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_huge_page_tlb_size: 0
I1014 02:19:45.033614 20 mutable_cf_options.cc:95] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_successive_merges: 0
I1014 02:19:45.033618 20 mutable_cf_options.cc:97] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: filter_deletes: 0
I1014 02:19:45.033624 20 mutable_cf_options.cc:99] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: 0
I1014 02:19:45.033628 20 mutable_cf_options.cc:101] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: soft_pending_compaction_bytes_limit: 0
I1014 02:19:45.033632 20 mutable_cf_options.cc:103] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: hard_pending_compaction_bytes_limit: 0
I1014 02:19:45.033637 20 mutable_cf_options.cc:105] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_file_num_compaction_trigger: 5
I1014 02:19:45.033641 20 mutable_cf_options.cc:107] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:45.033645 20 mutable_cf_options.cc:109] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:45.033650 20 mutable_cf_options.cc:111] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_grandparent_overlap_factor: 10
I1014 02:19:45.033654 20 mutable_cf_options.cc:113] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: expanded_compaction_factor: 25
I1014 02:19:45.033658 20 mutable_cf_options.cc:115] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: source_compaction_factor: 1
I1014 02:19:45.033663 20 mutable_cf_options.cc:117] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_base: 2097152
I1014 02:19:45.033668 20 mutable_cf_options.cc:119] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_multiplier: 1
I1014 02:19:45.033671 20 mutable_cf_options.cc:121] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_base: 10485760
I1014 02:19:45.033675 20 mutable_cf_options.cc:123] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier: 10
I1014 02:19:45.033682 20 mutable_cf_options.cc:131] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier_additional: 1, 1, 1, 1, 1, 1, 1
I1014 02:19:45.033692 20 mutable_cf_options.cc:133] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: verify_checksums_in_compaction: 1
I1014 02:19:45.033696 20 mutable_cf_options.cc:135] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_sequential_skip_in_iterations: 8
I1014 02:19:45.033849 20 sys_catalog.cc:543] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: configured and running, proceeding with master startup.
I1014 02:19:45.034133 1 master_main.cc:120] Master server successfully started.
I1014 02:19:45.036389 1 total_mem_watcher.cc:72] Root memtracker limit: 820140441 (782 MiB); this server will stop if memory usage exceeds 200% of that: 1640280882 bytes (1564 MiB).
I1014 02:19:45.346395 27 raft_consensus.cc:813] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: ReportFailDetected: Starting NORMAL_ELECTION...
I1014 02:19:45.346459 27 raft_consensus.cc:496] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Triggering leader pre-election, mode=NORMAL_ELECTION
I1014 02:19:45.346534 27 raft_consensus.cc:2856] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Snoozing failure detection for 3.076s
I1014 02:19:45.346659 27 raft_consensus.cc:535] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
I1014 02:19:45.346837 27 leader_election.cc:215] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [CANDIDATE]: Term 1 pre-election: Requesting vote from peer 3a2c988a713346188fe36efbec1d7be1
I1014 02:19:45.346940 27 leader_election.cc:215] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [CANDIDATE]: Term 1 pre-election: Requesting vote from peer 700c7a9791a44c17978609f7dc9fe141
W1014 02:19:45.349385 23 leader_election.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [CANDIDATE]: Term 1 pre-election: RPC error from VoteRequest() call to peer 700c7a9791a44c17978609f7dc9fe141: Remote error (yb/rpc/outbound_call.cc:440): Service unavailable (yb/master/catalog_manager.cc:4733): CatalogManager is not yet initialized
W1014 02:19:45.350082 23 leader_election.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [CANDIDATE]: Term 1 pre-election: RPC error from VoteRequest() call to peer 3a2c988a713346188fe36efbec1d7be1: Remote error (yb/rpc/outbound_call.cc:440): Service unavailable (yb/master/catalog_manager.cc:4733): CatalogManager is not yet initialized
I1014 02:19:45.350096 23 leader_election.cc:240] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost.
I1014 02:19:45.350252 27 raft_consensus.cc:2856] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Snoozing failure detection for 3.044s
I1014 02:19:45.350266 27 raft_consensus.cc:2705] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Leader Pre-election lost for term 1. Reason: None given. Originator:
I1014 02:19:45.748499 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:45.817325 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:46.278748 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:46.469602 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:46.512027 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:46.527133 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:46.760659 22 server_base.cc:477] Using private ip address yb-master-0.yb-masters.technology-system.svc.cluster.local
I1014 02:19:47.111382 22 raft_consensus.cc:1980] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Pre-election. Granting vote for candidate 700c7a9791a44c17978609f7dc9fe141 in term 1
I1014 02:19:47.134032 22 raft_consensus.cc:2917] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 0 FOLLOWER]: Advancing to term 1
I1014 02:19:47.140564 22 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from FOLLOWER to FOLLOWER. Consensus state: current_term: 1 leader_uuid: "" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:47.140681 22 raft_consensus.cc:2856] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 1 FOLLOWER]: Snoozing failure detection for 3.178s
I1014 02:19:47.145020 22 raft_consensus.cc:2483] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 700c7a9791a44c17978609f7dc9fe141 in term 1.
I1014 02:19:47.148974 22 raft_consensus.cc:1371] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 1 FOLLOWER]: Refusing update from remote peer 700c7a9791a44c17978609f7dc9fe141: Log matching property violated. Preceding OpId in replica: { term: 0 index: 0 }. Preceding OpId from leader: { term: 1 index: 1 }. (index mismatch)
I1014 02:19:47.150887 22 consensus_meta.cc:275] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Updating active role from FOLLOWER to FOLLOWER. Consensus state: current_term: 1 leader_uuid: "700c7a9791a44c17978609f7dc9fe141" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 0
I1014 02:19:47.150940 22 raft_consensus.cc:2795] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 1 FOLLOWER]: Calling mark dirty synchronously for reason code NEW_LEADER_ELECTED
I1014 02:19:47.150975 22 sys_catalog.cc:351] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: SysCatalogTable state changed. Locked=1. Reason: New leader 700c7a9791a44c17978609f7dc9fe141 elected. Latest consensus state: current_term: 1 leader_uuid: "700c7a9791a44c17978609f7dc9fe141" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I1014 02:19:47.150985 22 sys_catalog.cc:355] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: This master's current role is: FOLLOWER
I1014 02:19:47.156556 22 raft_consensus.cc:2795] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [term 1 FOLLOWER]: Calling mark dirty synchronously for reason code FOLLOWER_NO_OP_COMPLETE
I1014 02:19:47.156662 22 sys_catalog.cc:351] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: SysCatalogTable state changed. Locked=1. Reason: Replicate of NO_OP complete on follower. Latest consensus state: current_term: 1 leader_uuid: "700c7a9791a44c17978609f7dc9fe141" config { opid_index: -1 peers { permanent_uuid: "5e4ad9f98611428a84ecfdbd7f0c100e" member_type: VOTER last_known_private_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-0.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "3a2c988a713346188fe36efbec1d7be1" member_type: VOTER last_known_private_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-1.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "700c7a9791a44c17978609f7dc9fe141" member_type: VOTER last_known_private_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } last_known_broadcast_addr { host: "yb-master-2.yb-masters.technology-system.svc.cluster.local" port: 7100 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
I1014 02:19:47.156678 22 sys_catalog.cc:355] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [sys.catalog]: This master's current role is: FOLLOWER
I1014 02:19:47.164782 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:47.199283 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:47.332446 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:47.525041 22 db_impl.cc:637] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Shutting down RocksDB at: /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000
I1014 02:19:47.525094 22 db_impl.cc:650] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Flushing mem table on shutdown
I1014 02:19:47.525275 22 db_impl.cc:5307] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] New memtable created with log file: #3
I1014 02:19:47.525802 31 db_impl.cc:3170] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Calling FlushMemTableToOutputFile with column family [default], flush slots scheduled 1, total flush slots 1, compaction slots scheduled 0, compaction tasks [], total compaction slots 1
I1014 02:19:47.525915 31 flush_job.cc:249] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] [JOB 2] Flushing memtable with next log file: 3
I1014 02:19:47.526049 31 event_logger.cc:67] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: EVENT_LOG_v1 {"time_micros": 1571019587526013, "job": 2, "event": "flush_started", "num_memtables": 1, "num_entries": 4, "num_deletes": 0, "memory_usage": 472}
I1014 02:19:47.526072 31 flush_job.cc:277] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] [JOB 2] Level-0 flush table #10: started
I1014 02:19:47.527521 31 flush_job.cc:309] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] [JOB 2] Level-0 flush table #10: 66496 bytes OK
I1014 02:19:47.527590 31 event_logger.cc:67] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: EVENT_LOG_v1 {"time_micros": 1571019587527564, "cf_name": "default", "job": 2, "event": "table_file_creation", "file_number": 10, "file_size": 66496, "table_properties": {"data_size": 155, "data_index_size": 28, "filter_size": 65482, "filter_index_size": 20, "raw_key_size": 218, "raw_average_key_size": 54, "raw_value_size": 12, "raw_average_value_size": 3, "num_data_blocks": 1, "num_entries": 4, "num_filter_blocks": 1, "num_data_index_blocks": 1, "filter_policy_name": "DocKeyHashedComponentsFilter", "kDeletedKeys": "0"}}
I1014 02:19:47.527673 31 version_set.cc:2250] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Creating manifest 11
I1014 02:19:47.527964 31 version_set.cc:3316] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Writing version edit: comparator: "leveldb.BytewiseComparator"
I1014 02:19:47.528183 31 version_set.cc:3316] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Writing version edit: log_number: 0
I1014 02:19:47.528626 31 db_impl.cc:1869] [default] Level summary: files[1] max score 0.20
I1014 02:19:47.528728 31 memtable_list.cc:374] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] Level-0 commit table #10 started
I1014 02:19:47.528738 31 memtable_list.cc:390] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] Level-0 commit table #10: memtable #1 done
I1014 02:19:47.528743 31 event_logger.cc:77] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: EVENT_LOG_v1 {"time_micros": 1571019587528575, "job": 2, "event": "flush_finished", "lsm_state": [1]}
I1014 02:19:47.528817 31 db_impl.cc:1148] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [JOB 2] Delete /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000//MANIFEST-000001 type=4 #1 -- OK
I1014 02:19:47.530087 22 db_impl.cc:745] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Shutdown done
I1014 02:19:47.569468 22 tablet.cc:439] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Creating RocksDB database in dir /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000
I1014 02:19:47.571879 22 tablet.cc:556] Opening RocksDB at: /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000
I1014 02:19:47.575474 22 version_set.cc:2809] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Recovered from manifest file:/mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000/MANIFEST-000011 succeeded,manifest_file_number is 11, next_file_number is 13, last_sequence is 1125899908336064, log_number is 0,prev_log_number is 0,max_column_family is 0, flushed_values is 0x000000000c3e2b70 -> { op_id: { term: 1 index: 33014 } hybrid_time: { physical: 1570229853345880 } history_cutoff: <invalid> }
I1014 02:19:47.575510 22 version_set.cc:2817] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Column family [default] (ID 0), log number is 3
I1014 02:19:47.577966 22 tablet.cc:607] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Successfully opened a RocksDB database at /mnt/data0/yb-data/master/data/rocksdb/table-sys.catalog.uuid/tablet-00000000000000000000000000000000, obj: 0x56e8000
I1014 02:19:47.578011 22 version_set.cc:2250] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Creating manifest 16
I1014 02:19:47.578076 22 version_set.cc:3316] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Writing version edit: comparator: "leveldb.BytewiseComparator"
I1014 02:19:47.578617 22 version_set.cc:3316] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: Writing version edit: log_number: 3
new_files {
level: 0
number: 10
total_file_size: 23806103
base_file_size: 110191
smallest {
key: "H\200\000\000\001S000000010000300080000000000000af\000\000!J\200#\200\001\300 ^\010&^\200J\001\250X\002\000\000\000\004"
seqno: 1125899906842625
user_values {
tag: 1
data: "\200\001\300 ^\222\217\224\200J"
}
user_values {
tag: 10
data: "H\200\000\000\001"
}
user_values {
tag: 11
data: "$"
}
user_values {
tag: 12
data: "$"
}
user_values {
tag: 13
data: "$"
}
user_values {
tag: 14
data: "$"
}
}
largest {
key: "y\016\340@\000\000\000\000\200\0000\000\000\001\000\000\000O\000\0000\252!K\215#\200\001\300 V\212#\367\200>+\001\274\311\026\000\000\000\004"
seqno: 1125899908336064
user_values {
tag: 1
data: "\200\001\300 V\211\347\247\200?\253"
}
user_values {
tag: 10
data: "S\377\374\346w \016D\244\203Mtm\243\336^\343\000\000"
}
user_values {
tag: 11
data: "Sysql-catalog-configuration\000\000"
}
user_values {
tag: 12
data: "Syes_or_no_check\000\000"
}
user_values {
tag: 13
data: "SS\377\374\346w \016D\244\203Mtm\243\336^\343\000\001\000\001!\000\000"
}
user_values {
tag: 14
data: "SS\352\254t\017\241uO\267\242\305\333w\006\007\361\345\000\001\000\001!\000\000"
}
}
}
flushed_frontier {
[type.googleapis.com/yb.docdb.ConsensusFrontierPB] {
op_id {
term: 1
index: 4
}
hybrid_time: 6434896229301719040
history_cutoff: 18446744073709551614
}
}
I1014 02:19:47.578933 22 tablet.cc:2301] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Checkpoint restored from /home/yugabyte/share/initial_sys_catalog_snapshot/rocksdb
I1014 02:19:47.578951 22 tablet.cc:2302] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Sequence numbers: old=1125899906842628, restored=1125899908336064
I1014 02:19:47.578989 22 tablet.cc:2305] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e: Re-enabling compactions
I1014 02:19:47.579681 22 db_impl.cc:2452] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: SetOptions() on column family [default], inputs:
I1014 02:19:47.579705 22 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:47.579710 22 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:47.579713 22 db_impl.cc:2460] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] SetOptions succeeded
I1014 02:19:47.579718 22 mutable_cf_options.cc:83] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: write_buffer_size: 134217728
I1014 02:19:47.579722 22 mutable_cf_options.cc:85] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_write_buffer_number: 2
I1014 02:19:47.579726 22 mutable_cf_options.cc:87] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: arena_block_size: 131072
I1014 02:19:47.579730 22 mutable_cf_options.cc:89] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_bits: 0
I1014 02:19:47.579733 22 mutable_cf_options.cc:91] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_probes: 6
I1014 02:19:47.579737 22 mutable_cf_options.cc:93] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_huge_page_tlb_size: 0
I1014 02:19:47.579741 22 mutable_cf_options.cc:95] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_successive_merges: 0
I1014 02:19:47.579744 22 mutable_cf_options.cc:97] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: filter_deletes: 0
I1014 02:19:47.579748 22 mutable_cf_options.cc:99] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: 1
I1014 02:19:47.579751 22 mutable_cf_options.cc:101] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: soft_pending_compaction_bytes_limit: 0
I1014 02:19:47.579756 22 mutable_cf_options.cc:103] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: hard_pending_compaction_bytes_limit: 0
I1014 02:19:47.579759 22 mutable_cf_options.cc:105] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_file_num_compaction_trigger: 5
I1014 02:19:47.579763 22 mutable_cf_options.cc:107] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:47.579766 22 mutable_cf_options.cc:109] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:47.579771 22 mutable_cf_options.cc:111] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_grandparent_overlap_factor: 10
I1014 02:19:47.579774 22 mutable_cf_options.cc:113] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: expanded_compaction_factor: 25
I1014 02:19:47.579778 22 mutable_cf_options.cc:115] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: source_compaction_factor: 1
I1014 02:19:47.579782 22 mutable_cf_options.cc:117] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_base: 2097152
I1014 02:19:47.579785 22 mutable_cf_options.cc:119] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_multiplier: 1
I1014 02:19:47.579789 22 mutable_cf_options.cc:121] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_base: 10485760
I1014 02:19:47.579792 22 mutable_cf_options.cc:123] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier: 10
I1014 02:19:47.579798 22 mutable_cf_options.cc:131] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier_additional: 1, 1, 1, 1, 1, 1, 1
I1014 02:19:47.579802 22 mutable_cf_options.cc:133] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: verify_checksums_in_compaction: 1
I1014 02:19:47.579805 22 mutable_cf_options.cc:135] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_sequential_skip_in_iterations: 8
I1014 02:19:47.580298 22 db_impl.cc:2452] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: SetOptions() on column family [default], inputs:
I1014 02:19:47.580319 22 db_impl.cc:2455] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: false
I1014 02:19:47.580324 22 db_impl.cc:2460] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: [default] SetOptions succeeded
I1014 02:19:47.580328 22 mutable_cf_options.cc:83] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: write_buffer_size: 134217728
I1014 02:19:47.580332 22 mutable_cf_options.cc:85] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_write_buffer_number: 2
I1014 02:19:47.580337 22 mutable_cf_options.cc:87] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: arena_block_size: 131072
I1014 02:19:47.580339 22 mutable_cf_options.cc:89] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_bits: 0
I1014 02:19:47.580343 22 mutable_cf_options.cc:91] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_probes: 6
I1014 02:19:47.580346 22 mutable_cf_options.cc:93] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: memtable_prefix_bloom_huge_page_tlb_size: 0
I1014 02:19:47.580349 22 mutable_cf_options.cc:95] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_successive_merges: 0
I1014 02:19:47.580353 22 mutable_cf_options.cc:97] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: filter_deletes: 0
I1014 02:19:47.580356 22 mutable_cf_options.cc:99] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: disable_auto_compactions: 0
I1014 02:19:47.580360 22 mutable_cf_options.cc:101] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: soft_pending_compaction_bytes_limit: 0
I1014 02:19:47.580363 22 mutable_cf_options.cc:103] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: hard_pending_compaction_bytes_limit: 0
I1014 02:19:47.580368 22 mutable_cf_options.cc:105] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_file_num_compaction_trigger: 5
I1014 02:19:47.580371 22 mutable_cf_options.cc:107] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_slowdown_writes_trigger: 24
I1014 02:19:47.580376 22 mutable_cf_options.cc:109] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: level0_stop_writes_trigger: 48
I1014 02:19:47.580381 22 mutable_cf_options.cc:111] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_grandparent_overlap_factor: 10
I1014 02:19:47.580387 22 mutable_cf_options.cc:113] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: expanded_compaction_factor: 25
I1014 02:19:47.580399 22 mutable_cf_options.cc:115] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: source_compaction_factor: 1
I1014 02:19:47.580411 22 mutable_cf_options.cc:117] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_base: 2097152
I1014 02:19:47.580416 22 mutable_cf_options.cc:119] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: target_file_size_multiplier: 1
I1014 02:19:47.580422 22 mutable_cf_options.cc:121] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_base: 10485760
I1014 02:19:47.580435 22 mutable_cf_options.cc:123] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier: 10
I1014 02:19:47.580461 22 mutable_cf_options.cc:131] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_bytes_for_level_multiplier_additional: 1, 1, 1, 1, 1, 1, 1
I1014 02:19:47.580468 22 mutable_cf_options.cc:133] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: verify_checksums_in_compaction: 1
I1014 02:19:47.580472 22 mutable_cf_options.cc:135] T 00000000000000000000000000000000 P 5e4ad9f98611428a84ecfdbd7f0c100e [R]: max_sequential_skip_in_iterations: 8
I1014 02:19:48.100263 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
I1014 02:19:48.192000 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
W1014 02:19:48.367630 23 scoped_leader_shared_lock.cc:113] Long lock of catalog manager: 0.108s
@ 0x7fcc8fdce9b5 yb::master::ScopedLeaderSharedLock::Unlock()
@ 0x7fcc8fe09181 yb::master::ScopedLeaderSharedLock::~ScopedLeaderSharedLock()
@ 0x7fcc8fe061b7 yb::master::MasterServiceImpl::GetMasterRegistration()
@ 0x7fcc8a7242da yb::master::MasterServiceIf::Handle()
@ 0x7fcc888cb0d1 yb::rpc::ServicePoolImpl::Handle()
@ 0x7fcc88877764 yb::rpc::InboundCall::InboundCallTask::Run()
@ 0x7fcc888d6c58 yb::rpc::(anonymous namespace)::Worker::Execute()
@ 0x7fcc86edea19 yb::Thread::SuperviseThread()
@ 0x7fcc828e2694 start_thread
@ 0x7fcc8201f41d __clone
@ (nil) (unknown)
I1014 02:19:48.383733 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
[... the same "Webserver: error reading: Connection reset by peer" line repeats continuously from 02:19:48 through 02:20:01, elided for brevity ...]
W1014 02:20:01.360783 8 log.cc:700] Time spent Fsync log took a long time: real 0.091s user 0.000s sys 0.000s
I1014 02:20:01.585094 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
[... more identical "Connection reset by peer" lines elided ...]
I1014 02:20:07.963194 22 server_base.cc:477] Using private ip address yb-master-0.yb-masters.technology-system.svc.cluster.local
I1014 02:20:08.014134 28 webserver.cc:278] Webserver: error reading: Connection reset by peer
[... more identical "Connection reset by peer" lines elided, continuing through 02:20:23 ...]
There is no such error on Minikube; it appears on Alicloud only. I haven't added any health checks.
Interesting! Is it possible that the Alicloud k8s offering automatically pings certain endpoints on the pods? cc @ramkumarvs, any clue?
As for the log spew itself, it sounds consistent with something external contacting the server with invalid requests. We should probably also do a better job at our webserver layer and log some typical error info, such as which endpoint is being hit, what type of request it is, etc.
@bmatican I am also curious whether this log is generated by a client that is receiving the error from a target, or by a server that is logging these errors while processing the requests...
If I can find out who is making these requests, it may help to solve this issue.
Thank you
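Until the webserver logs the peer address itself, one low-effort way to see who is opening these connections is to run a throwaway listener and record which IPs connect (in a real cluster you would bind it on the pod and point the suspected prober at it). A minimal Python sketch, not tied to YugabyteDB itself:

```python
import socket
import threading

def log_probes(host="127.0.0.1", port=0, max_conns=1):
    """Accept connections and record each peer address, mimicking the
    kind of per-request logging the master's webserver currently lacks."""
    seen = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(8)
    bound_port = srv.getsockname()[1]

    def run():
        for _ in range(max_conns):
            conn, peer = srv.accept()
            seen.append(peer[0])  # remote IP of whoever connected
            print(f"probe from {peer[0]}:{peer[1]}")
            conn.close()
        srv.close()

    t = threading.Thread(target=run)
    t.start()
    return bound_port, seen, t

# Exercise the listener with one local connection.
port, seen, t = log_probes()
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", port))
c.close()
t.join()
```

On a pod you could instead capture the same information with tcpdump against port 7000; the point is simply to attribute the connections to a source IP.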
Sorry @chris-ng-scmp, I must have missed your reply since it was so close to mine :) The logs you see are YB server-side logs.
As for fixing this: on the one hand, other than some log spew, I would hope these are pretty harmless (please correct me if you're actually experiencing any DB-level issues because of this!).
On the other hand, as I mentioned above, I agree it would be worth adding more server-side logging on our end, so we can at least record who is sending these requests and which endpoints are being hit.
FYI: this error is basically coming from https://github.com/cloudera/squeasel/blob/8ac777a122fccf0358cb8562e900f8e9edd9ed11/squeasel.c#L1067, indicating that the remote end hung up. This is pretty common for load balancer health checks in the cloud.
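For reference, the condition behind this log line is easy to reproduce outside YugabyteDB: a client that aborts its connection with a TCP RST (as some aggressive health checkers do) makes the server's next read fail with "Connection reset by peer". A minimal Python sketch of both sides:

```python
import socket
import struct
import threading

outcome = []

def serve_once(srv):
    """Accept one connection and read from it, recording how the read ends."""
    conn, _ = srv.accept()
    try:
        data = conn.recv(1024)
        outcome.append("clean" if data == b"" else "data")
    except ConnectionResetError:
        # This is the condition the master's webserver logs as
        # "error reading: Connection reset by peer".
        outcome.append("reset")
    finally:
        conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# Act like an aggressive health checker: connect, then abort with an RST
# instead of a graceful FIN (SO_LINGER with a zero timeout forces the RST).
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
cli.close()

t.join()
srv.close()
print(outcome)
```

A checker that closed gracefully would instead make `recv` return an empty byte string, which squeasel would not log as an error.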
Jira Link: DB-2477
Hello Team, I have followed the Helm deployment document here: https://docs.yugabyte.com/latest/deploy/kubernetes/helm-chart/
I am able to access the dashboard and run commands, including creating databases and records in ysql, but the master pod keeps showing the following log:
I have no idea how to stop this error; any advice would be appreciated.
Thank you