gluster / gluster-kubernetes

GlusterFS Native Storage Service for Kubernetes
Apache License 2.0

Get https://10.96.0.1:443/api/v1/namespaces/default/pods?labelSelector=glusterfs-node: dial tcp 10.96.0.1:443: i/o timeout #615

Open 2804337402 opened 4 years ago

2804337402 commented 4 years ago

```
[root@deploy-heketi-7bbff457f6-r5c9f /]# heketi-cli topology load --json=topology.json
Creating cluster ... ID: 457df2dc01e8ce93cd28d259a80dd948
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node node-k8s200 ... Unable to create node: New Node doesn't have glusterd running
    Creating node node-k8s201 ... Unable to create node: New Node doesn't have glusterd running
    Creating node node-k8s202 ... Unable to create node: New Node doesn't have glusterd running
```


```
[root@node-k8s200 mongodb_cluster]# kubectl logs deploy-heketi-7bbff457f6-r5c9f
Setting up heketi database
No database file found
Heketi v9.0.0-1-g57a5f356-release-9
[heketi] INFO 2019/10/05 12:15:05 Loaded kubernetes executor
[heketi] INFO 2019/10/05 12:15:05 Volumes per cluster limit is set to default value of 1000
[heketi] INFO 2019/10/05 12:15:05 GlusterFS Application Loaded
[heketi] INFO 2019/10/05 12:15:05 Started Node Health Cache Monitor
[heketi] INFO 2019/10/05 12:15:05 Started background pending operations cleaner
Listening on port 8080
[heketi] INFO 2019/10/05 12:15:15 Starting Node Health Status refresh
[heketi] INFO 2019/10/05 12:15:15 Cleaned 0 nodes from health cache
[heketi] INFO 2019/10/05 12:17:05 Starting Node Health Status refresh
[heketi] INFO 2019/10/05 12:17:05 Cleaned 0 nodes from health cache
[heketi] INFO 2019/10/05 12:19:05 Starting Node Health Status refresh
[heketi] INFO 2019/10/05 12:19:05 Cleaned 0 nodes from health cache
[heketi] INFO 2019/10/05 12:21:05 Starting Node Health Status refresh
[heketi] INFO 2019/10/05 12:21:05 Cleaned 0 nodes from health cache
[heketi] INFO 2019/10/05 12:23:05 Starting Node Health Status refresh
[heketi] INFO 2019/10/05 12:23:05 Cleaned 0 nodes from health cache
[negroni] 2019-10-05T12:23:45Z | 200 | 89.798µs | localhost:8080 | GET /clusters
[negroni] 2019-10-05T12:23:45Z | 201 | 1.382543ms | localhost:8080 | POST /clusters
[cmdexec] INFO 2019/10/05 12:23:45 Check Glusterd service status in node node-k8s200
[kubeexec] ERROR 2019/10/05 12:24:15 heketi/pkg/remoteexec/kube/target.go:134:kube.TargetDaemonSet.GetTargetPod: Get https://10.96.0.1:443/api/v1/namespaces/default/pods?labelSelector=glusterfs-node: dial tcp 10.96.0.1:443: i/o timeout
[kubeexec] ERROR 2019/10/05 12:24:15 heketi/pkg/remoteexec/kube/target.go:135:kube.TargetDaemonSet.GetTargetPod: Failed to get list of pods
[cmdexec] ERROR 2019/10/05 12:24:15 heketi/executors/cmdexec/peer.go:81:cmdexec.(*CmdExecutor).GlusterdCheck: Failed to get list of pods
[heketi] ERROR 2019/10/05 12:24:15 heketi/apps/glusterfs/app_node.go:107:glusterfs.(*App).NodeAdd: Failed to get list of pods
[heketi] ERROR 2019/10/05 12:24:15 heketi/apps/glusterfs/app_node.go:108:glusterfs.(*App).NodeAdd: New Node doesn't have glusterd running
[negroni] 2019-10-05T12:23:45Z | 400 | 30.002301916s | localhost:8080 | POST /nodes
[cmdexec] INFO 2019/10/05 12:24:15 Check Glusterd service status in node node-k8s201
```
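Reading the log, the "New Node doesn't have glusterd running" message is a symptom, not the cause: the `dial tcp 10.96.0.1:443: i/o timeout` shows the heketi pod cannot open a TCP connection to the cluster's `kubernetes` service ClusterIP at all, so it never gets to check glusterd. A minimal sketch of that kind of connectivity probe (a hypothetical helper, not part of heketi; the IP and port are taken from the error above) looks like this:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection timeouts, refusals, and unreachable networks --
        # the same class of failure as the "i/o timeout" in the heketi log.
        return False
```

On a healthy cluster, running `can_reach("10.96.0.1", 443)` from inside the heketi pod should return True; the 30-second timeout in the log suggests traffic to the service IP is not being routed, which typically points at kube-proxy, the CNI network, or iptables/firewall rules rather than at glusterd on the new nodes.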

2804337402 commented 4 years ago

(Two WeChat screenshots attached: 微信图片_20191005223242, 微信图片_20191005223324)

r1cebank commented 4 years ago

I am having the exact same issue. Any help?

mart3051 commented 4 years ago

Me too... any help is greatly appreciated.