Closed: ahjing99 closed this issue 4 months ago
When all pods are placed on a single node and that node is stopped, more logs are returned:
➜ ~ k logs kubeblocks-7f5fc565cd-zxvmk -n kb-system
Defaulted container "manager" out of: manager, tools (init)
2023-06-07T12:11:11.353Z INFO setup config file: /etc/kubeblocks/config.yaml
2023-06-07T12:11:11.353Z INFO setup config settings: map[alsologtostderr:false backup_pv_configmap_name: backup_pv_configmap_namespace: backup_pvc_create_policy: backup_pvc_init_capacity: backup_pvc_name: backup_pvc_storage_class: cert_dir:/tmp/k8s-webhook-server/serving-certs cm_namespace:kb-system cm_recon_retry_duration_ms:100 config_manager_grpc_port:9901 config_manager_log_level:info data_plane_affinity:{"nodeAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"preference":{"matchExpressions":[{"key":"kb-data","operator":"In","values":["true"]}]},"weight":100}]}} data_plane_tolerations:[{"effect":"NoSchedule","key":"kb-data","operator":"Equal","value":"true"}] enable_debug_sysaccounts:false health_probe_bind_address::8081 kill_container_signal:SIGKILL kubeblocks_addon_helm_install_options:[--atomic --cleanup-on-fail --wait] kubeblocks_addon_helm_uninstall_options:[] kubeblocks_addon_sa_name:kubeblocks-addon-installer kubeblocks_serviceaccount_name:kubeblocks kubeblocks_tools_image:registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.6.0-alpha.13 kubeconfig: leader_elect:true log_backtrace_at::0 log_dir: logtostderr:false maxconcurrentreconciles_addon:8 maxconcurrentreconciles_clusterdef:8 maxconcurrentreconciles_clusterversion:8 maxconcurrentreconciles_dataprotection:8 metrics_bind_address::8080 pod_min_ready_seconds:10 probe_service_grpc_port:50001 probe_service_http_port:3501 probe_service_log_level:info stderrthreshold:2 v:0 vmodule: volumesnapshot:true volumesnapshot_api_beta:true zap_devel:false zap_encoder:console zap_log_level: zap_stacktrace_level: zap_time_encoding:iso8601]
2023-06-07T12:11:41.355Z ERROR Failed to get API Group-Resources {"error": "Get \"https://10.116.0.1:443/api?timeout=32s\": dial tcp 10.116.0.1:443: i/o timeout"}
sigs.k8s.io/controller-runtime/pkg/cluster.New
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/cluster/cluster.go:161
sigs.k8s.io/controller-runtime/pkg/manager.New
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/manager/manager.go:359
main.main
/src/cmd/manager/main.go:226
runtime.main
/usr/local/go/src/runtime/proc.go:250
2023-06-07T12:11:41.356Z ERROR setup unable to start manager {"error": "Get \"https://10.116.0.1:443/api?timeout=32s\": dial tcp 10.116.0.1:443: i/o timeout"}
main.main
/src/cmd/manager/main.go:255
runtime.main
/usr/local/go/src/runtime/proc.go:250
➜ ~ kbcli version
Kubernetes: v1.25.8-gke.500
KubeBlocks: 0.6.0-alpha.13
kbcli: 0.6.0-alpha.13
After stopping all nodes of the GKE cluster, the KubeBlocks controller crashes.