Closed md2k closed 5 years ago
@md2k Since I'm able to set up a PVC on a new cluster created with the code submitted in #258, I'll try to investigate the issue you reported ASAP. Thx for the feedback!
I'll double-check again this evening; maybe I overlooked something.
Hi @mavimo, maybe you can advise what to do with CSI?
```
I0312 18:52:55.121932 1 csi_handler.go:524] Can't get CSINodeInfo k8s-dev-worker-04: csinodeinfos.csi.storage.k8s.io "k8s-dev-worker-04" not found
I0312 18:52:55.121981 1 csi_handler.go:388] Saving attach error to "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:52:55.138679 1 csi_handler.go:398] Saved attach error to "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:52:55.138740 1 csi_handler.go:103] Error processing "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc": failed to attach: node "k8s-dev-worker-04" has no NodeID annotation
I0312 18:52:55.138671 1 controller.go:139] Ignoring VolumeAttachment "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc" change
I0312 18:55:21.106674 1 controller.go:173] Started VA processing "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.106750 1 csi_handler.go:93] CSIHandler: processing VA "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.106766 1 csi_handler.go:120] Attaching "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.106781 1 csi_handler.go:259] Starting attach operation for "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.106975 1 csi_handler.go:214] PV finalizer is already set on "pvc-9d45199b-44f7-11e9-888a-9600001dd99b"
I0312 18:55:21.112911 1 csi_handler.go:524] Can't get CSINodeInfo k8s-dev-worker-04: csinodeinfos.csi.storage.k8s.io "k8s-dev-worker-04" not found
I0312 18:55:21.112978 1 csi_handler.go:388] Saving attach error to "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.124336 1 csi_handler.go:398] Saved attach error to "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc"
I0312 18:55:21.124387 1 csi_handler.go:103] Error processing "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc": failed to attach: node "k8s-dev-worker-04" has no NodeID annotation
I0312 18:55:21.124817 1 controller.go:139] Ignoring VolumeAttachment "csi-4f85d0c027dcff14997168efbc3674ba47e378f12476131bbd23449ee74359bc" change
```
The PVC and PV are created... but the volume won't mount.
I found that the nodes where CSI failed to attach the PV do not have this annotation: `csi.volume.kubernetes.io/nodeid: {"csi.hetzner.cloud":"xxxxxxx"}`. If I understood correctly, it is set by hcloud-csi-node?
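For comparison, on a node where the driver registered correctly, the annotation appears on the Node object roughly like this (a sketch, not exact output; the masked server ID is kept as in the example above):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: k8s-dev-worker-04
  annotations:
    # Set during node registration by the hcloud-csi-node DaemonSet
    csi.volume.kubernetes.io/nodeid: '{"csi.hetzner.cloud":"xxxxxxx"}'
```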
OK, this problem is solved: the template for the Hetzner CSI nodes does not have a tolerations section, and I use taints. This is missing from the pod spec:

```yaml
tolerations:
  - operator: Exists
    effect: NoExecute
  - operator: Exists
    effect: NoSchedule
```

which is needed to deploy the DaemonSet to any node. But this finding is not related to hetzner-kube itself. Maybe it would even be better to integrate deployment of the hcloud CSI driver into the cluster during roll-out, or as a plugin.
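For context, a sketch of where those tolerations would sit in the hcloud-csi-node DaemonSet pod spec (names and labels are abbreviated assumptions, not the exact upstream manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hcloud-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: hcloud-csi
  template:
    metadata:
      labels:
        app: hcloud-csi
    spec:
      # Tolerate all taints so the driver pod lands on every node,
      # including tainted ones (the missing piece described above).
      tolerations:
        - operator: Exists
          effect: NoExecute
        - operator: Exists
          effect: NoSchedule
      containers:
        - name: hcloud-csi-driver
          image: hetznercloud/hcloud-csi-driver  # tag elided; sketch only
```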
@md2k Currently, implementing it as a plugin (I mean: not enabling CSI on cluster setup, but updating the cluster after creation) is a bit tricky. Maybe we can keep the current implementation in the "core" and create a plugin to set up all the needed configs, like:

- CSIDriver
- CSINodeInfo

WDYT?
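A sketch of the two alpha CRDs such a plugin would need to register; the field layout is an assumption based on the `csi.storage.k8s.io` group name visible in the attacher error messages above:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csidrivers.csi.storage.k8s.io
spec:
  group: csi.storage.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: CSIDriver
    plural: csidrivers
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Matches the "csinodeinfos.csi.storage.k8s.io ... not found" error above
  name: csinodeinfos.csi.storage.k8s.io
spec:
  group: csi.storage.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: CSINodeInfo
    plural: csinodeinfos
```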
Yeah, for the current moment we need to add it as a plugin, because otherwise we would need to re-create clusters with the tool. But maybe after the plugin is in place we can add a flag to enable it during roll-out for new clusters?
I can take a look into it, but as with my previous PR, I'm a bit short on time this week and next (main job). After that I plan to get back to the PR, and I have a few more ideas in mind.
@md2k np, I'm also a bit busy and have no time to support you on the other PR (sorry). I'll try to find some more time... I need a TARDIS to find some more time 🕐
Why closed? I guess adding

```yaml
extraArgs:
  feature-gates: CSINodeInfo=true,CSIDriverRegistry=true
```

to the apiServer section of the ClusterConfiguration (v1beta1) would fix it.
@suchwerk thx, I'll try it and send a PR after testing (I hope today).
You should also ensure that iscsid is enabled and started on the node(s).
@suchwerk After a few tests I'm unable to find a solution, so it needs more investigation. Adding:

```yaml
apiServer:
  extraArgs:
    feature-gates: CSINodeInfo=true,CSIDriverRegistry=true
```

seems to be ignored. Adding:

```yaml
featureGates:
  CSINodeInfo: true
  CSIDriverRegistry: true
```

generates an error indicating that these feature gates are not supported:

```
stdout:[featureGates: Invalid value: map[string]bool{"CSIDriverRegistry":true, "CSINodeInfo":true}: CSINodeInfo is not a valid feature name., featureGates: Invalid value: map[string]bool{"CSIDriverRegistry":true, "CSINodeInfo":true}: CSIDriverRegistry is not a valid feature name.]
```

I'll continue to investigate it in the next days. If you have time to test it and find a working solution, ping me!
I forgot to mention that you should also add to each node:

```sh
cat << EOF > /etc/default/kubelet
KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"
EOF
systemctl restart kubelet
```
and the ClusterConfiguration/KubeletConfiguration:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  ...
apiServer:
  certSANs:
    ...
  extraArgs:
    feature-gates: CSINodeInfo=true,CSIDriverRegistry=true
etcd:
  external:
    endpoints:
      ...
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSINodeInfo: true
  CSIDriverRegistry: true
```
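The /etc/default/kubelet step can be sanity-checked without touching a real node; this sketch writes the same drop-in to a scratch file (so it runs unprivileged) and verifies both gates are present:

```shell
#!/bin/sh
# Sketch: write the kubelet drop-in to a temp file instead of
# /etc/default/kubelet, then check its contents before using it for real.
conf="$(mktemp)"
cat << 'EOF' > "$conf"
KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"
EOF
# Both feature gates must appear in the single KUBELET_EXTRA_ARGS line.
if grep -q 'CSINodeInfo=true' "$conf" && grep -q 'CSIDriverRegistry=true' "$conf"; then
  echo "feature gates present"
fi
rm -f "$conf"
```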
I am not sure whether the KubeletConfiguration part is redundant.
@suchwerk I think it is (redundant). Maybe I need to include the feature gates using this syntax:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  ...
apiServer:
  certSANs:
    ...
  featureGates:
    CSINodeInfo: true
    CSIDriverRegistry: true
etcd:
  external:
    endpoints:
      ...
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSINodeInfo: true
  CSIDriverRegistry: true
```

I'll try it ASAP and keep you updated. Thx for pointing me in the right direction (I hope :P )
For me, featureGates in the apiServer section did not work (kubeadm.k8s.io/v1beta1), but this did:

```yaml
extraArgs:
  feature-gates: CSINodeInfo=true,CSIDriverRegistry=true
```
And also consider: in Kubernetes 1.14 (to be released on Monday), CSI topology support is built in.
I did not experience any issues with the Hetzner CSI driver. I just SSH'ed into the server, enabled the feature gates in kubelet and kube-apiserver, and followed the "Getting Started" guide.
I tested this and it worked. Therefore closed by #272
Hi @mavimo, I think your last PR GH-258 for issue #256 does not fully fix the problem. We need to add the feature-gate options to kube-apiserver as well, otherwise the CSI plugin fails with the messages I described here: https://github.com/hetznercloud/csi-driver/issues/26