Additional Logs from CSI-Driver
Checked the CSI pod and see the following error; in the message below it says Error: secret "hcloud-csi" not found. Is there a secret which needs to be created for this?
λ kubectl describe pod hcloud-csi-controller-0 -n kube-system
Name: hcloud-csi-controller-0
Namespace: kube-system
Priority: 0
Node: xp-worker-2/135.181.25.216
Start Time: Sat, 22 Aug 2020 23:26:39 +0530
Labels: app=hcloud-csi-controller
controller-revision-hash=hcloud-csi-controller-797687565c
statefulset.kubernetes.io/pod-name=hcloud-csi-controller-0
Annotations: <none>
Status: Pending
IP: 10.244.1.6
Controlled By: StatefulSet/hcloud-csi-controller
Containers:
csi-attacher:
Container ID: docker://5452b122d3edfd4b14e90b8030c046b47672b28e475381aa718162d3bef12c7e
Image: quay.io/k8scsi/csi-attacher:v2.2.0
Image ID: docker-pullable://quay.io/k8scsi/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--v=5
State: Running
Started: Sun, 23 Aug 2020 10:43:22 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 00:04:28 +0530
Finished: Sun, 23 Aug 2020 10:43:07 +0530
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
csi-resizer:
Container ID: docker://a87301ba175c8041013413ffa205e732a4eb9dd54d52126f7308ed1a74bab0b6
Image: quay.io/k8scsi/csi-resizer:v0.3.0
Image ID: docker-pullable://quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--v=5
State: Running
Started: Sun, 23 Aug 2020 10:43:22 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 00:04:28 +0530
Finished: Sun, 23 Aug 2020 10:43:07 +0530
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
csi-provisioner:
Container ID: docker://68867e8606198d1200f6a1316d9092d9b70240edaa05cb013ae6ef9fad44a582
Image: quay.io/k8scsi/csi-provisioner:v1.6.0
Image ID: docker-pullable://quay.io/k8scsi/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679
Port: <none>
Host Port: <none>
Args:
--provisioner=csi.hetzner.cloud
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--feature-gates=Topology=true
--v=5
State: Running
Started: Sun, 23 Aug 2020 10:43:22 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 00:04:28 +0530
Finished: Sun, 23 Aug 2020 10:43:07 +0530
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
hcloud-csi-driver:
Container ID:
Image: hetznercloud/hcloud-csi-driver:1.4.0
Image ID:
Ports: 9189/TCP, 9808/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CreateContainerConfigError
Ready: False
Restart Count: 0
Liveness: http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5
Environment:
CSI_ENDPOINT: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
METRICS_ENDPOINT: 0.0.0.0:9189
HCLOUD_TOKEN: <set to the key 'token' in secret 'hcloud-csi'> Optional: false
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
liveness-probe:
Container ID: docker://6f6c62613649e40add8e651f8e493b148520521a034b4cac346456d3b883abed
Image: quay.io/k8scsi/livenessprobe:v1.1.0
Image ID: docker-pullable://quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
State: Running
Started: Sun, 23 Aug 2020 10:43:27 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 00:04:33 +0530
Finished: Sun, 23 Aug 2020 10:43:07 +0530
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
socket-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
hcloud-csi-token-r4tgh:
Type: Secret (a volume populated by a Secret)
SecretName: hcloud-csi-token-r4tgh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11h (x9 over 11h) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Warning FailedScheduling 11h (x3 over 11h) default-scheduler 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled 11h default-scheduler Successfully assigned kube-system/hcloud-csi-controller-0 to xp-worker-2
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-attacher:v2.2.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.2.0"
Normal Created 11h kubelet, xp-worker-2 Created container csi-attacher
Normal Started 11h kubelet, xp-worker-2 Started container csi-attacher
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-resizer:v0.3.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.3.0"
Normal Created 11h kubelet, xp-worker-2 Created container csi-resizer
Normal Started 11h kubelet, xp-worker-2 Started container csi-resizer
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-provisioner:v1.6.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.6.0"
Normal Started 11h kubelet, xp-worker-2 Started container csi-provisioner
Normal Created 11h kubelet, xp-worker-2 Created container csi-provisioner
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Started 11h kubelet, xp-worker-2 Started container liveness-probe
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 11h kubelet, xp-worker-2 Created container liveness-probe
Normal Pulled 11h (x3 over 11h) kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Warning Failed 11h (x3 over 11h) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Normal Pulling 11h (x84 over 11h) kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Warning FailedCreatePodSandBox 10h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "30b7a66f094a4dbe3f1a1f39ef2c27dd3dba80fd8f558d887b4870b03a2203be" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d6248aac84fdecbb5aa2aa3b31df1078f2e2984c1445b4d3d28176ce62f2b9de" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dea8f82082b8a51e4cdb3d59bb0546e73920cf5b822fe815ce698f085116a872" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 10h (x5 over 10h) kubelet, xp-worker-2 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 10h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c58e789aa325f0a4cbe7db87e21acc328c2e3c8d82c766ce015627d5e98c4ce9" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal Pulling 10h kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Created 10h kubelet, xp-worker-2 Created container csi-attacher
Normal Started 10h kubelet, xp-worker-2 Started container csi-attacher
Normal Pulled 10h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-resizer:v0.3.0" already present on machine
Normal Pulled 10h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-attacher:v2.2.0" already present on machine
Normal Started 10h kubelet, xp-worker-2 Started container csi-resizer
Normal Pulled 10h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-provisioner:v1.6.0" already present on machine
Normal Created 10h kubelet, xp-worker-2 Created container csi-provisioner
Normal Started 10h kubelet, xp-worker-2 Started container csi-provisioner
Normal Created 10h kubelet, xp-worker-2 Created container csi-resizer
Normal Pulled 10h kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Pulling 10h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Pulled 10h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 10h kubelet, xp-worker-2 Created container liveness-probe
Normal Started 10h kubelet, xp-worker-2 Started container liveness-probe
Warning Failed 10h (x23 over 10h) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Warning FailedCreatePodSandBox 14m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "10bbb2b9f399921267d141e26cabe8315d86eb3f1c2c69cc1ca371323a325e08" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 14m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "481c78230ebf0b69bb63b9c97fbc299b2a2006f0ed8afd5cf1e28bb8251e21f4" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 14m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b475b66899f4f53489cb22d8387ca705fb9f26db6bfc48c4f120698e713affb3" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 14m (x4 over 14m) kubelet, xp-worker-2 Pod sandbox changed, it will be killed and re-created.
Normal Pulled 14m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-attacher:v2.2.0" already present on machine
Normal Created 14m kubelet, xp-worker-2 Created container csi-attacher
Normal Created 14m kubelet, xp-worker-2 Created container csi-provisioner
Normal Started 14m kubelet, xp-worker-2 Started container csi-attacher
Normal Created 14m kubelet, xp-worker-2 Created container csi-resizer
Normal Started 14m kubelet, xp-worker-2 Started container csi-resizer
Normal Pulled 14m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-provisioner:v1.6.0" already present on machine
Normal Pulled 14m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-resizer:v0.3.0" already present on machine
Normal Started 14m kubelet, xp-worker-2 Started container csi-provisioner
Normal Pulled 14m kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Pulling 14m kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Pulled 14m kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Started 14m kubelet, xp-worker-2 Started container liveness-probe
Normal Created 14m kubelet, xp-worker-2 Created container liveness-probe
Warning Failed 14m (x2 over 14m) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Normal Pulling 4m48s (x45 over 14m) kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
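As the HCLOUD_TOKEN entry in the container spec above shows, the driver expects a Secret named hcloud-csi in kube-system with a key called token holding the Hetzner Cloud API token. A minimal way to create it (the token value here is a placeholder):

kubectl -n kube-system create secret generic hcloud-csi --from-literal=token=<HCLOUD_API_TOKEN>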
Created an hcloud-csi secret in kube-system, but the CSI controller is still having issues:
λ kubectl describe pod hcloud-csi-controller-0 -n kube-system
Name: hcloud-csi-controller-0
Namespace: kube-system
Priority: 0
Node: xp-worker-2/135.181.25.216
Start Time: Sat, 22 Aug 2020 23:26:39 +0530
Labels: app=hcloud-csi-controller
controller-revision-hash=hcloud-csi-controller-797687565c
statefulset.kubernetes.io/pod-name=hcloud-csi-controller-0
Annotations: <none>
Status: Running
IP: 10.244.1.9
Controlled By: StatefulSet/hcloud-csi-controller
Containers:
csi-attacher:
Container ID: docker://70ab1a38b9f9ad0f9fc6f5fe2f98a2548ad3d16068c7030f69cc0d8656244e68
Image: quay.io/k8scsi/csi-attacher:v2.2.0
Image ID: docker-pullable://quay.io/k8scsi/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--v=5
State: Running
Started: Sun, 23 Aug 2020 11:12:01 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 10:43:22 +0530
Finished: Sun, 23 Aug 2020 11:11:49 +0530
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
csi-resizer:
Container ID: docker://d7d625496148c8b4c58dcc85c606a9940e49aa2c45dc7ff4f124ebe733cd1105
Image: quay.io/k8scsi/csi-resizer:v0.3.0
Image ID: docker-pullable://quay.io/k8scsi/csi-resizer@sha256:eff2d6a215efd9450d90796265fc4d8832a54a3a098df06edae6ff3a5072b08f
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--v=5
State: Running
Started: Sun, 23 Aug 2020 11:12:02 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 10:43:22 +0530
Finished: Sun, 23 Aug 2020 11:11:49 +0530
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
csi-provisioner:
Container ID: docker://4880c9f9a6ac7e26d087c15061bf9bb76a6491de6861e1a4fd5610d39d5952fe
Image: quay.io/k8scsi/csi-provisioner:v1.6.0
Image ID: docker-pullable://quay.io/k8scsi/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679
Port: <none>
Host Port: <none>
Args:
--provisioner=csi.hetzner.cloud
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
--feature-gates=Topology=true
--v=5
State: Running
Started: Sun, 23 Aug 2020 11:12:02 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 10:43:22 +0530
Finished: Sun, 23 Aug 2020 11:11:49 +0530
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
hcloud-csi-driver:
Container ID: docker://8c9b474ad5cc68a26a4e92efaacda31fdd28365b696176fd8e2477d394e314fe
Image: hetznercloud/hcloud-csi-driver:1.4.0
Image ID: docker-pullable://hetznercloud/hcloud-csi-driver@sha256:c467ed090406e32c1ac733ea70dce9efce53751954e217bb4c77044c7c394e0c
Ports: 9189/TCP, 9808/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Sun, 23 Aug 2020 11:13:42 +0530
Finished: Sun, 23 Aug 2020 11:13:42 +0530
Ready: False
Restart Count: 8
Liveness: http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5
Environment:
CSI_ENDPOINT: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
METRICS_ENDPOINT: 0.0.0.0:9189
HCLOUD_TOKEN: <set to the key 'token' in secret 'hcloud-csi'> Optional: false
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
liveness-probe:
Container ID: docker://766e23e0afbcc24dd730e727674c862d83a7f9175a343b6322a948207ff82f05
Image: quay.io/k8scsi/livenessprobe:v1.1.0
Image ID: docker-pullable://quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5
Port: <none>
Host Port: <none>
Args:
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
State: Running
Started: Sun, 23 Aug 2020 11:12:07 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Sun, 23 Aug 2020 10:43:27 +0530
Finished: Sun, 23 Aug 2020 11:11:49 +0530
Ready: True
Restart Count: 3
Environment: <none>
Mounts:
/var/lib/csi/sockets/pluginproxy/ from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hcloud-csi-token-r4tgh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
socket-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
hcloud-csi-token-r4tgh:
Type: Secret (a volume populated by a Secret)
SecretName: hcloud-csi-token-r4tgh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11h (x9 over 11h) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Warning FailedScheduling 11h (x3 over 11h) default-scheduler 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled 11h default-scheduler Successfully assigned kube-system/hcloud-csi-controller-0 to xp-worker-2
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-attacher:v2.2.0"
Normal Started 11h kubelet, xp-worker-2 Started container csi-attacher
Normal Created 11h kubelet, xp-worker-2 Created container csi-attacher
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-resizer:v0.3.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.2.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.3.0"
Normal Created 11h kubelet, xp-worker-2 Created container csi-resizer
Normal Started 11h kubelet, xp-worker-2 Started container csi-resizer
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/csi-provisioner:v1.6.0"
Normal Started 11h kubelet, xp-worker-2 Started container csi-provisioner
Normal Created 11h kubelet, xp-worker-2 Created container csi-provisioner
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.6.0"
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Started 11h kubelet, xp-worker-2 Started container liveness-probe
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 11h kubelet, xp-worker-2 Created container liveness-probe
Normal Pulled 11h (x3 over 11h) kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Warning Failed 11h (x3 over 11h) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Normal Pulling 11h (x84 over 11h) kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Warning FailedCreatePodSandBox 11h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "30b7a66f094a4dbe3f1a1f39ef2c27dd3dba80fd8f558d887b4870b03a2203be" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 11h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d6248aac84fdecbb5aa2aa3b31df1078f2e2984c1445b4d3d28176ce62f2b9de" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 11h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dea8f82082b8a51e4cdb3d59bb0546e73920cf5b822fe815ce698f085116a872" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 11h (x5 over 11h) kubelet, xp-worker-2 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 11h kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c58e789aa325f0a4cbe7db87e21acc328c2e3c8d82c766ce015627d5e98c4ce9" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal Pulled 11h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-attacher:v2.2.0" already present on machine
Normal Created 11h kubelet, xp-worker-2 Created container csi-attacher
Normal Started 11h kubelet, xp-worker-2 Started container csi-attacher
Normal Pulled 11h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-resizer:v0.3.0" already present on machine
Normal Created 11h kubelet, xp-worker-2 Created container csi-resizer
Normal Started 11h kubelet, xp-worker-2 Started container csi-resizer
Normal Pulled 11h kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-provisioner:v1.6.0" already present on machine
Normal Created 11h kubelet, xp-worker-2 Created container csi-provisioner
Normal Started 11h kubelet, xp-worker-2 Started container csi-provisioner
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Pulling 11h kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Pulled 11h kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 11h kubelet, xp-worker-2 Created container liveness-probe
Normal Started 11h kubelet, xp-worker-2 Started container liveness-probe
Warning Failed 11h (x23 over 11h) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Warning FailedCreatePodSandBox 31m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "10bbb2b9f399921267d141e26cabe8315d86eb3f1c2c69cc1ca371323a325e08" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "481c78230ebf0b69bb63b9c97fbc299b2a2006f0ed8afd5cf1e28bb8251e21f4" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b475b66899f4f53489cb22d8387ca705fb9f26db6bfc48c4f120698e713affb3" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal Created 31m kubelet, xp-worker-2 Created container csi-attacher
Normal Pulled 31m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-attacher:v2.2.0" already present on machine
Normal SandboxChanged 31m (x4 over 31m) kubelet, xp-worker-2 Pod sandbox changed, it will be killed and re-created.
Normal Started 31m kubelet, xp-worker-2 Started container csi-provisioner
Normal Pulled 31m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-resizer:v0.3.0" already present on machine
Normal Created 31m kubelet, xp-worker-2 Created container csi-resizer
Normal Started 31m kubelet, xp-worker-2 Started container csi-resizer
Normal Pulled 31m kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-provisioner:v1.6.0" already present on machine
Normal Created 31m kubelet, xp-worker-2 Created container csi-provisioner
Normal Started 31m kubelet, xp-worker-2 Started container csi-attacher
Normal Pulled 31m kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Pulling 31m kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Pulled 31m kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 31m kubelet, xp-worker-2 Created container liveness-probe
Normal Started 31m kubelet, xp-worker-2 Started container liveness-probe
Warning Failed 31m (x2 over 31m) kubelet, xp-worker-2 Error: secret "hcloud-csi" not found
Normal Pulling 11m (x86 over 31m) kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Warning BackOff 6m20s (x10 over 7m36s) kubelet, xp-worker-2 Back-off restarting failed container
Warning FailedMount 2m43s kubelet, xp-worker-2 MountVolume.SetUp failed for volume "hcloud-csi-token-r4tgh" : failed to sync secret cache: timed out waiting for the condition
Warning FailedCreatePodSandBox 2m41s kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8f98df7fa5a34b48018656b0216c780ee2cf7b0a2ebd1baca54a9860f82a2359" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 2m39s (x3 over 2m42s) kubelet, xp-worker-2 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m39s kubelet, xp-worker-2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6e82c6a91ef9e7fa6c8ac97c5d05fb8785728a00f4f05e0e480ad2d8f68ed34e" network for pod "hcloud-csi-controller-0": networkPlugin cni failed to set up pod "hcloud-csi-controller-0_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Normal Created 2m38s kubelet, xp-worker-2 Created container csi-attacher
Normal Started 2m38s kubelet, xp-worker-2 Started container csi-attacher
Normal Pulled 2m38s kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-resizer:v0.3.0" already present on machine
Normal Created 2m38s kubelet, xp-worker-2 Created container csi-resizer
Normal Pulled 2m38s kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-attacher:v2.2.0" already present on machine
Normal Started 2m37s kubelet, xp-worker-2 Started container csi-provisioner
Normal Started 2m37s kubelet, xp-worker-2 Started container csi-resizer
Normal Pulled 2m37s kubelet, xp-worker-2 Container image "quay.io/k8scsi/csi-provisioner:v1.6.0" already present on machine
Normal Pulling 2m37s kubelet, xp-worker-2 Pulling image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Created 2m37s kubelet, xp-worker-2 Created container csi-provisioner
Normal Pulled 2m35s kubelet, xp-worker-2 Successfully pulled image "hetznercloud/hcloud-csi-driver:1.4.0"
Normal Started 2m35s kubelet, xp-worker-2 Started container hcloud-csi-driver
Normal Pulling 2m35s kubelet, xp-worker-2 Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Created 2m35s kubelet, xp-worker-2 Created container hcloud-csi-driver
Normal Created 2m33s kubelet, xp-worker-2 Created container liveness-probe
Normal Pulled 2m33s kubelet, xp-worker-2 Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
Normal Started 2m32s kubelet, xp-worker-2 Started container liveness-probe
Warning BackOff 2m31s (x2 over 2m32s) kubelet, xp-worker-2 Back-off restarting failed container
LoadBalancer issue: kubectl -n kube-system logs hcloud-cloud-controller-manager-565849f78f-zdkkd?
CSI issue: kubectl -n kube-system logs hcloud-csi-controller-0?
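Note that hcloud-csi-controller-0 runs several containers, so kubectl logs needs a container name to show the driver's own output, e.g. (as used further below):

kubectl -n kube-system logs hcloud-csi-controller-0 hcloud-csi-driver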
λ kubectl logs hcloud-cloud-controller-manager-565849f78f-zdkkd -n kube-system
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0823 12:08:00.576722 1 serving.go:313] Generated self-signed cert in-memory
W0823 12:08:01.324974 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0823 12:08:01.343067 1 controllermanager.go:120] Version: v0.0.0-master+$Format:%h$
I0823 12:08:01.343167 1 cloud.go:90] "%s: %s empty" hcloud/newCloud="HCLOUD_NETWORK"
Hetzner Cloud k8s cloud controller v1.7.0 started
W0823 12:08:02.067615 1 controllermanager.go:132] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
I0823 12:08:02.071175 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0823 12:08:02.072115 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0823 12:08:02.072382 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0823 12:08:02.072477 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0823 12:08:02.077111 1 secure_serving.go:178] Serving securely on [::]:10258
I0823 12:08:02.081888 1 node_controller.go:110] Sending events to api server.
I0823 12:08:02.082444 1 controllermanager.go:247] Started "cloud-node"
I0823 12:08:02.086334 1 node_lifecycle_controller.go:78] Sending events to api server
I0823 12:08:02.086672 1 controllermanager.go:247] Started "cloud-node-lifecycle"
I0823 12:08:02.089359 1 controllermanager.go:247] Started "service"
I0823 12:08:02.090510 1 core.go:101] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0823 12:08:02.090699 1 controllermanager.go:244] Skipping "route"
I0823 12:08:02.091368 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0823 12:08:02.097947 1 controller.go:208] Starting service controller
I0823 12:08:02.098147 1 shared_informer.go:223] Waiting for caches to sync for service
I0823 12:08:02.172438 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0823 12:08:02.172769 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0823 12:08:02.198886 1 shared_informer.go:230] Caches are synced for service
I0823 12:08:02.199450 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[xp-worker-1 xp-worker-2]
I0823 12:08:02.204863 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 12:08:02.536620 1 controller.go:244] error processing service default/nginx (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:02.537943 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
E0823 12:08:02.760895 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293612
E0823 12:08:03.237748 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293376
E0823 12:08:03.574494 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293609
I0823 12:08:07.538269 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[xp-worker-1 xp-worker-2]
I0823 12:08:07.552886 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 12:08:07.699096 1 controller.go:244] error processing service default/nginx (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:07.700210 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:17.700337 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[xp-worker-1 xp-worker-2]
I0823 12:08:17.702396 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 12:08:17.852928 1 controller.go:244] error processing service default/nginx (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:17.855440 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:37.854372 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[xp-worker-1 xp-worker-2]
I0823 12:08:37.858883 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 12:08:37.999482 1 controller.go:244] error processing service default/nginx (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:08:37.999761 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:09:18.001285 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx" nodes=[xp-worker-2 xp-worker-1]
I0823 12:09:18.006536 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 12:09:18.121561 1 controller.go:244] error processing service default/nginx (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
I0823 12:09:18.122354 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx", UID:"f93e9cce-5825-4ecb-a180-b2f8c25ef1ed", APIVersion:"v1", ResourceVersion:"7118", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
You need to set some annotations on your LoadBalancer service to let the hcloud-cloud-controller-manager know where to create the Hetzner Load Balancer instance.
λ kubectl logs hcloud-csi-controller-0 hcloud-csi-driver -n kube-system
level=error ts=2020-08-23T12:11:09.656802874Z msg="entered token is invalid (must be exactly 64 characters long)"
I've entered the API token for this as follows
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: <API-TOKEN>
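If the value pasted into stringData picked up stray whitespace or a trailing newline, it will fail the 64-character check in the driver log above. One way to verify what was actually stored, assuming the secret exists:

kubectl -n kube-system get secret hcloud-csi -o jsonpath='{.data.token}' | base64 -d | wc -c

This should print exactly 64.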
You need to set some annotations on your LoadBalancer service to let the hcloud-cloud-controller-manager know where to create the Hetzner Load Balancer instance.
Thank you so much @MatthiasLohr. So this would go in the service deployment file, correct?
- Please use proper formatting here. Otherwise it's hard to detect syntax problems.
- If you really added exactly this secret, then you added an empty secret. You need to add a secret containing your actual API token.
Sorry about the formatting, I've corrected it. But I did add the correct API token, which was generated in the Hetzner console; I had also created the hcloud context using that same token.
Yep. See https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/master/docs/load_balancers.md.
Will try that out. Thanks again @MatthiasLohr
I've applied the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    load-balancer.hetzner.cloud/location: hel1
    #load-balancer.hetzner.cloud/use-private-ip: "true"
    #load-balancer.hetzner.cloud/network: 138627
    load-balancer.hetzner.cloud/name: "nginx-lb-1"
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
Output of kubectl logs -f hcloud-cloud-controller-manager-565849f78f-zdkkd -n kube-system:
E0823 13:03:15.668206 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293376
E0823 13:03:15.994346 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293609
E0823 13:03:16.289128 1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 7293612
I0823 13:04:08.774435 1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx-service" nodes=[xp-worker-1 xp-worker-2]
I0823 13:04:08.783038 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"2d0f0471-f32a-43f2-ab30-522370b6a9b9", APIVersion:"v1", ResourceVersion:"21591", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
E0823 13:04:08.932496 1 controller.go:244] error processing service default/nginx-service (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://:
I0823 13:04:08.933003 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"2d0f0471-f32a-43f2-ab30-522370b6a9b9", APIVersion:"v1", ResourceVersion:"21591", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://:
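The missing prefix hcloud:// errors indicate that the nodes' spec.providerID is not in the hcloud://<server-id> form the cloud controller manager expects, which can happen when nodes register before the external cloud provider is active (kubelet without --cloud-provider=external). A quick way to inspect the field:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'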
Uncommented the network ID annotation in the service YAML file, but even though I've specified the location I still get this error in the controller:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    load-balancer.hetzner.cloud/location: hel1
    load-balancer.hetzner.cloud/use-private-ip: "true"
    load-balancer.hetzner.cloud/network: 138627
    load-balancer.hetzner.cloud/name: nginx-lb-1
    #load-balancer.hetzner.cloud/network-zone: eu-central
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
E0823 13:29:16.899589 1 controller.go:244] error processing service default/nginx-service (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
- Please use proper formatting here. Otherwise it's hard to detect syntax problems.
- If you really added exactly this secret, then you added an empty secret. You need to add a secret containing your actual API token.
@MatthiasLohr you were correct, there was a problem with API Token which I had configured, there was white-space in it. Once I corrected that, the csi-controller is now in running state. Thanks for your help again :)
E0823 13:29:16.899589 1 controller.go:244] error processing service default/nginx-service (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
kubectl describe service nginx-service?
I've specified the annotations in the nginx service, but for some reason they're not showing up.
λ more ngnix-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    load-balancer.hetzner.cloud/location: hel1
    load-balancer.hetzner.cloud/use-private-ip: "true"
    load-balancer.hetzner.cloud/network: 138627
    load-balancer.hetzner.cloud/name: nginx-lb-1
    #load-balancer.hetzner.cloud/network-zone: eu-central
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
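One possible cause worth checking with the file above: Kubernetes annotation values must be strings, so an unquoted number such as load-balancer.hetzner.cloud/network: 138627 can make the API server reject the annotations when the manifest is applied. Quoting the value rules this out:

    load-balancer.hetzner.cloud/network: "138627"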
λ kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-service","namespace":"default"},"spec":{"ports":[{"port":80...
Selector: app=nginx
Type: LoadBalancer
IP: 10.107.236.58
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31900/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SyncLoadBalancerFailed 4s service-controller Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set
Normal EnsuringLoadBalancer 0s (x2 over 4s) service-controller Ensuring load balancer
... recreate it??
Recreate nginx-service? Yes I did, but the result is the same; as you can see below, none of those annotations appear. Have I missed any step? Thank you again.
λ kubectl get service nginx-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-service","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":80}],"selector":{"app":"nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-08-24T03:48:30Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
    manager: hcloud-cloud-controller-manager
    operation: Update
    time: "2020-08-24T03:48:30Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-08-24T03:48:30Z"
  name: nginx-service
  namespace: default
  resourceVersion: "148710"
  selfLink: /api/v1/namespaces/default/services/nginx-service
  uid: 000c253a-48ec-47b5-b973-b582a57bfa89
spec:
  clusterIP: 10.105.190.65
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32140
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
Then double-check your file paths, namespaces, etc. Kubernetes does no annotation filtering, so if they are not there, you are either submitting the wrong file to Kubernetes or displaying the wrong resource.
I uninstalled the service & the deployment for nginx. I tried installing the Ambassador API Gateway using Helm; the following was the result:
λ helm install ambassador datawire/ambassador -f values.yaml -n ambassador
coalesce.go:196: warning: cannot overwrite table with non table for podSecurityPolicy (map[])
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: ambassador
LAST DEPLOYED: Mon Aug 24 20:02:33 2020
NAMESPACE: ambassador
STATUS: deployed
REVISION: 1
NOTES:
-------------------------------------------------------------------------------
Congratulations! You've successfully installed Ambassador!
-------------------------------------------------------------------------------
To get the IP address of Ambassador, run the following commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w --namespace ambassador ambassador'
On GKE/Azure:
export SERVICE_IP=$(kubectl get svc --namespace ambassador ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
On AWS:
export SERVICE_IP=$(kubectl get svc --namespace ambassador ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo http://$SERVICE_IP:
For help, visit our Slack at https://d6e.co/slack or view the documentation online at https://www.getambassador.io.
A describe on the ambassador service results in the following. The question now is: I'm configuring network with the ID of the network; should it be the name instead?
λ kubectl describe service ambassador -n ambassador
Name: ambassador
Namespace: ambassador
Labels: app.kubernetes.io/component=ambassador-service
app.kubernetes.io/instance=ambassador
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ambassador
app.kubernetes.io/part-of=ambassador
helm.sh/chart=ambassador-6.5.2
product=aes
Annotations: load-balancer.hetzner.cloud/location: hel1
load-balancer.hetzner.cloud/name: ambassador-lb-1
load-balancer.hetzner.cloud/network: 138627
load-balancer.hetzner.cloud/use-private-ip: true
Selector: app.kubernetes.io/instance=ambassador,app.kubernetes.io/name=ambassador
Type: LoadBalancer
IP: 10.99.37.166
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30866/TCP
Endpoints: 10.244.2.14:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31646/TCP
Endpoints: 10.244.2.14:8443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 15s (x3 over 36s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 14s (x3 over 30s) service-controller Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: use private ip: missing network id
I tried using the name instead of the ID as well, but the result is the same.
λ kubectl describe service ambassador -n ambassador
Name: ambassador
Namespace: ambassador
Labels: app.kubernetes.io/component=ambassador-service
app.kubernetes.io/instance=ambassador
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ambassador
app.kubernetes.io/part-of=ambassador
helm.sh/chart=ambassador-6.5.2
product=aes
Annotations: load-balancer.hetzner.cloud/location: hel1
load-balancer.hetzner.cloud/name: ambassador-lb-1
load-balancer.hetzner.cloud/network: xpresslane-network
load-balancer.hetzner.cloud/use-private-ip: true
Selector: app.kubernetes.io/instance=ambassador,app.kubernetes.io/name=ambassador
Type: LoadBalancer
IP: 10.99.37.166
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30866/TCP
Endpoints: 10.244.2.14:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31646/TCP
Endpoints: 10.244.2.14:8443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 30s (x7 over 4m19s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 30s (x7 over 4m13s) service-controller Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: use private ip: missing network id
Did you read the last line of the output? use private ip: missing network id. You did not provide a network id to the hcloud-cloud-controller-manager.
I was just reading up on that. It means I would have to deploy with Network support instead of the Basic Deployment like I've done now. Is that correct?
You configured load-balancer.hetzner.cloud/use-private-ip: true, but you did not provide a private network.
I assumed this annotation was doing that: load-balancer.hetzner.cloud/network: xpresslane-network. But from the docs it looks like I would have to use hcloud-cloud-controller-manager with Network support, whereas I used the basic one. Is that correct, @MatthiasLohr?
Deleted the servers & recreated the cluster, only this time I deployed hcloud-cloud-controller-manager with the Networking option (using Flannel instead of Cilium), and now when I install the Ambassador API gateway I can see the load balancer being created.
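For reference, the networking-enabled deployment differs from the basic one mainly in that the controller manager is told about the network via the HCLOUD_NETWORK environment variable (the startup log earlier reported it as empty). A sketch of the relevant container env, assuming the token Secret is named hcloud and also carries a network key, as in the project's networking example manifest:

env:
  - name: HCLOUD_TOKEN
    valueFrom:
      secretKeyRef:
        name: hcloud    # assumed secret name
        key: token
  - name: HCLOUD_NETWORK
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network    # assumed key holding the network name or ID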
@MatthiasLohr Thank you so much for your help
You're welcome!
Hi,
I've followed the steps listed below to create a Kubernetes cluster on Hetzner. I did not create Floating IPs or a Load Balancer, and did not use the Network deployment. When I deploy nginx and specify the service type as LoadBalancer, the external IP stays in a pending state. I was expecting the hcloud-cloud-controller-manager to assign a load balancer IP, since the docs state it "allows to use Hetzner Cloud Load Balancers with Kubernetes Services".
Is there any step I've missed, or is anyone else facing a similar issue?
Describe on hcloud-cloud-controller-manager gives the following output