Closed: kissi7 closed this issue 2 years ago
I'm seeing a very similar error. I'm running v0.4.0 on a kube 1.14 cluster on a preview release of fedora coreos.
I had to modify the flexvolume driver container to point at a different volume plugin dir, since the destination for the driver is hard-coded to a location under /usr/libexec, which is read-only on rpm-ostree-based systems. It's possible that has something to do with the error I'm seeing, but if so it's not obvious to me.
I'm going to try w/ master and w/ an earlier version to see if it matters.
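For reference, one way around the read-only /usr/libexec is to move the plugin dir rather than patch the container. This is only a sketch, assuming a systemd-managed kubelet; /var/lib/kubelet/volumeplugins is an example path, and kube-controller-manager needs the matching --flex-volume-plugin-dir on the masters or the attach callouts will still look in the old location:

```shell
# Point the kubelet at a writable flexvolume plugin directory (example path).
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/20-volume-plugin-dir.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
EOF
systemctl daemon-reload && systemctl restart kubelet

# And on each master, kube-controller-manager must get the matching flag:
#   --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
```

Whether KUBELET_EXTRA_ARGS is honored depends on how your kubelet unit is set up (kubeadm's drop-in does source it); adjust the drop-in to your install.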
In my case on Ubuntu (RKE install of k8s), /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ is writable. Because the kubelet runs inside Docker, I had to propagate the plugin folder from the host into the kubelet container (I found a guide at https://rook.io/docs/rook/master/flexvolume.html under the "Rancher" section).
I can see the driver is populated, and it looks like it does something when I run it from inside the kubelet container:
root@k8s-node1:~# docker exec -it kubelet /bin/bash
root@k8s-node1:/# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver init
{"status":"Success","message":"success","Capabilities":{"attach":true}}root@k8s-node1:/#
@kissi7 So, is it working for you now?
No, it does not attach the disk to the VM in oVirt.
I need more from the kube-controller-manager logs to see the exact error when it tries to fire the attach or mount callouts. I don't see them in the snippet. Also the journal from node1 would help.
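Something along these lines should surface the attach callout errors if they are there (the grep keywords are only a suggestion; adjust to taste):

```shell
# Filter the controller-manager log down to attach/detach and flexvolume lines.
# Falls through to a message when nothing matches (or kubectl is unavailable).
kubectl logs -n kube-system -l component=kube-controller-manager --since=10m 2>/dev/null \
  | grep -Ei 'attach|detach|flexvolume|ovirt' \
  || echo "no attach/detach or flexvolume lines in the last 10m"
```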
My controller log looks like this:
$ kubectl logs -n kube-system -l component=kube-controller-manager --since=10m
I0401 22:43:48.333528 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"98e22a46-54cf-11e9-9cc4-566fd5ed000e", APIVersion:"v1", ResourceVersion:"726195", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
E0401 22:43:48.334145 1 pvc_protection_controller.go:138] PVC default/1g-ovirt-cow-disk failed with : Operation cannot be fulfilled on persistentvolumeclaims "1g-ovirt-cow-disk": the object has been modified; please apply your changes to the latest version and try again
I0401 22:43:48.506639 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"98e22a46-54cf-11e9-9cc4-566fd5ed000e", APIVersion:"v1", ResourceVersion:"726196", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0401 22:43:48.677888 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"98e22a46-54cf-11e9-9cc4-566fd5ed000e", APIVersion:"v1", ResourceVersion:"726200", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0401 22:43:50.791972 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"98e22a46-54cf-11e9-9cc4-566fd5ed000e", APIVersion:"v1", ResourceVersion:"726210", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
E0401 22:43:51.988616 1 pv_protection_controller.go:116] PV pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e failed with : Operation cannot be fulfilled on persistentvolumes "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e": the object has been modified; please apply your changes to the latest version and try again
This is from the node:
$ journalctl -u kubelet --since "10 minutes ago" --no-pager
-- Logs begin at Fri 2019-02-22 18:14:07 UTC, end at Mon 2019-04-01 22:49:02 UTC. --
Apr 01 22:40:05 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:40:05.915510 945 kubelet.go:1657] Unable to mount volumes for pod "testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]; skipping pod
Apr 01 22:40:05 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:40:05.915552 945 pod_workers.go:190] Error syncing pod c31b8105-525b-11e9-8d2e-566fd5ed0009 ("testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]
Apr 01 22:40:18 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:40:18.923923 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-4b429124-525a-11e9-9cc4-566fd5ed000e
Apr 01 22:40:24 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:40:24.810390 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-4b429124-525a-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "c31b8105-525b-11e9-8d2e-566fd5ed0009")
Apr 01 22:40:24 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:40:24.814453 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:42:26.814425505 +0000 UTC m=+273642.210745941 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"c31b8105-525b-11e9-8d2e-566fd5ed0009\") "
Apr 01 22:41:45 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:41:45.657287 945 container_manager_linux.go:490] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified. cpu: /, memory: /system.slice/kubelet.service
Apr 01 22:42:21 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:42:21.915327 945 kubelet.go:1657] Unable to mount volumes for pod "testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]; skipping pod
Apr 01 22:42:21 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:42:21.915371 945 pod_workers.go:190] Error syncing pod c31b8105-525b-11e9-8d2e-566fd5ed0009 ("testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]
Apr 01 22:42:26 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:42:26.879139 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-4b429124-525a-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "c31b8105-525b-11e9-8d2e-566fd5ed0009")
Apr 01 22:42:26 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:42:26.883106 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:44:28.88307986 +0000 UTC m=+273764.279400313 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4b429124-525a-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"c31b8105-525b-11e9-8d2e-566fd5ed0009\") "
Apr 01 22:42:33 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:42:33.043172 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-4b429124-525a-11e9-9cc4-566fd5ed000e
Apr 01 22:42:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:42:53.036290 945 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-jwktx" (UniqueName: "kubernetes.io/secret/c31b8105-525b-11e9-8d2e-566fd5ed0009-default-token-jwktx") pod "c31b8105-525b-11e9-8d2e-566fd5ed0009" (UID: "c31b8105-525b-11e9-8d2e-566fd5ed0009")
Apr 01 22:42:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:42:53.038680 945 operation_generator.go:815] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b8105-525b-11e9-8d2e-566fd5ed0009-default-token-jwktx" (OuterVolumeSpecName: "default-token-jwktx") pod "c31b8105-525b-11e9-8d2e-566fd5ed0009" (UID: "c31b8105-525b-11e9-8d2e-566fd5ed0009"). InnerVolumeSpecName "default-token-jwktx". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 01 22:42:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:42:53.136554 945 reconciler.go:301] Volume detached for volume "default-token-jwktx" (UniqueName: "kubernetes.io/secret/c31b8105-525b-11e9-8d2e-566fd5ed0009-default-token-jwktx") on node "node4.osas.lab.eng.rdu2.redhat.com" DevicePath ""
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:43:53.104211 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:43:53.157595 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:43:53.157677 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jwktx" (UniqueName: "kubernetes.io/secret/98f5fd40-54cf-11e9-9cc4-566fd5ed000e-default-token-jwktx") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:43:53.157749 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:43:53.657705521 +0000 UTC m=+273729.054026005 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:43:53.597653 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:43:53.658821 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:43:53 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:43:53.658977 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:43:54.658940532 +0000 UTC m=+273730.055261018 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:43:54 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:43:54.661239 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:43:54 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:43:54.661342 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:43:56.661314389 +0000 UTC m=+273732.057634830 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:43:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:43:56.666507 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:43:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:43:56.666605 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:44:00.666575859 +0000 UTC m=+273736.062896297 (durationBeforeRetry 4s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:44:00 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:44:00.676055 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:44:00 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:00.688535 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:44:08.688503208 +0000 UTC m=+273744.084823655 (durationBeforeRetry 8s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:44:08 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:44:08.695043 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:44:08 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:08.698580 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:44:24.698532172 +0000 UTC m=+273760.094852614 (durationBeforeRetry 16s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:44:24 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:44:24.732163 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:44:24 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:24.784456 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:44:56.784424842 +0000 UTC m=+273792.180745280 (durationBeforeRetry 32s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:44:35 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:35.916453 945 kubelet.go:1657] Unable to mount volumes for pod "testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002 default-token-jwktx]. list of unattached volumes=[pv0002 default-token-jwktx]; skipping pod
Apr 01 22:44:35 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:35.917097 945 pod_workers.go:190] Error syncing pod c31b8105-525b-11e9-8d2e-566fd5ed0009 ("testpodwithflex_default(c31b8105-525b-11e9-8d2e-566fd5ed0009)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002 default-token-jwktx]. list of unattached volumes=[pv0002 default-token-jwktx]
Apr 01 22:44:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:44:56.807331 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:44:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:44:56.812034 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:46:00.812004248 +0000 UTC m=+273856.208324692 (durationBeforeRetry 1m4s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:45:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:45:56.294022 945 kubelet.go:1657] Unable to mount volumes for pod "testpodwithflex_default(98f5fd40-54cf-11e9-9cc4-566fd5ed000e)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]; skipping pod
Apr 01 22:45:56 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:45:56.295032 945 pod_workers.go:190] Error syncing pod 98f5fd40-54cf-11e9-9cc4-566fd5ed000e ("testpodwithflex_default(98f5fd40-54cf-11e9-9cc4-566fd5ed000e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]
Apr 01 22:46:00 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:46:00.849060 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:46:00 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:46:00.852786 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:48:02.852759023 +0000 UTC m=+273978.249079463 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:46:10 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:46:10.981075 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e
Apr 01 22:46:45 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:46:45.657701 945 container_manager_linux.go:490] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified. cpu: /, memory: /system.slice/kubelet.service
Apr 01 22:48:02 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: I0401 22:48:02.922771 945 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e") pod "testpodwithflex" (UID: "98f5fd40-54cf-11e9-9cc4-566fd5ed000e")
Apr 01 22:48:02 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:48:02.947780 945 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\"" failed. No retries permitted until 2019-04-01 22:50:04.947754044 +0000 UTC m=+274100.344074480 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e\") pod \"testpodwithflex\" (UID: \"98f5fd40-54cf-11e9-9cc4-566fd5ed000e\") "
Apr 01 22:48:13 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:48:13.915518 945 kubelet.go:1657] Unable to mount volumes for pod "testpodwithflex_default(98f5fd40-54cf-11e9-9cc4-566fd5ed000e)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]; skipping pod
Apr 01 22:48:13 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: E0401 22:48:13.915566 945 pod_workers.go:190] Error syncing pod 98f5fd40-54cf-11e9-9cc4-566fd5ed000e ("testpodwithflex_default(98f5fd40-54cf-11e9-9cc4-566fd5ed000e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-jwktx]
Apr 01 22:48:27 node4.osas.lab.eng.rdu2.redhat.com kubelet[945]: W0401 22:48:27.042754 945 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e
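Both "Volume has not been added to the list of VolumesInUse" and "Volume not attached according to node status" are the kubelet comparing against the node object's status fields. A quick way to see what the node actually reports (the node name is from this thread; substitute your own, and the sample JSON below only mimics the failure shape seen in the journal):

```shell
# On a live cluster:
#   kubectl get node node4.osas.lab.eng.rdu2.redhat.com \
#     -o jsonpath='{.status.volumesInUse}{"\n"}{.status.volumesAttached}{"\n"}'
# A healthy attach lists the flexvolume unique name in BOTH fields. This sample
# mirrors the journal above: marked in use, but never attached.
status='{"volumesInUse":["flexvolume-ovirt/ovirt-flexvolume-driver/pvc-98e22a46-54cf-11e9-9cc4-566fd5ed000e"],"volumesAttached":[]}'

if echo "$status" | grep -q '"volumesAttached":\[\]'; then
  echo "volume in use but NOT attached according to node status"
fi
```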
$ docker logs kube-controller-manager --follow --since=10m
I0402 11:21:04.955702 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"6779e224-5539-11e9-8b6d-001a4a16010c", APIVersion:"v1", ResourceVersion:"2398062", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
E0402 11:21:04.959042 1 pvc_protection_controller.go:138] PVC default/1g-ovirt-cow-disk failed with : Operation cannot be fulfilled on persistentvolumeclaims "1g-ovirt-cow-disk": the object has been modified; please apply your changes to the latest version and try again
I0402 11:21:04.969277 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"6779e224-5539-11e9-8b6d-001a4a16010c", APIVersion:"v1", ResourceVersion:"2398064", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0402 11:21:05.080857 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"6779e224-5539-11e9-8b6d-001a4a16010c", APIVersion:"v1", ResourceVersion:"2398072", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0402 11:21:07.093669 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"6779e224-5539-11e9-8b6d-001a4a16010c", APIVersion:"v1", ResourceVersion:"2398084", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0402 11:21:09.160102 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"6779e224-5539-11e9-8b6d-001a4a16010c", APIVersion:"v1", ResourceVersion:"2398104", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0402 11:21:09.554015 1 pv_controller.go:824] volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" entered phase "Bound"
I0402 11:21:09.554061 1 pv_controller.go:963] volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" bound to claim "default/1g-ovirt-cow-disk"
E0402 11:21:09.576937 1 pv_protection_controller.go:116] PV pvc-6779e224-5539-11e9-8b6d-001a4a16010c failed with : Operation cannot be fulfilled on persistentvolumes "pvc-6779e224-5539-11e9-8b6d-001a4a16010c": the object has been modified; please apply your changes to the latest version and try again
I0402 11:21:09.597200 1 pv_controller.go:768] claim "default/1g-ovirt-cow-disk" entered phase "Bound"
W0402 11:22:54.677330 1 plugins.go:845] FindExpandablePluginBySpec(pvc-9c075643-513f-11e9-987d-001a4a160111) -> err:no volume plugin matched
W0402 11:22:54.677427 1 plugins.go:845] FindExpandablePluginBySpec(pvc-84b4b91e-5140-11e9-8c76-001a4a16010c) -> err:no volume plugin matched
W0402 11:22:54.677463 1 plugins.go:842] FindExpandablePluginBySpec(pvc-6779e224-5539-11e9-8b6d-001a4a16010c) -> returning noopExpandableVolumePluginInstance
W0402 11:24:54.677706 1 plugins.go:845] FindExpandablePluginBySpec(pvc-9c075643-513f-11e9-987d-001a4a160111) -> err:no volume plugin matched
W0402 11:24:54.678123 1 plugins.go:845] FindExpandablePluginBySpec(pvc-84b4b91e-5140-11e9-8c76-001a4a16010c) -> err:no volume plugin matched
W0402 11:24:54.678209 1 plugins.go:842] FindExpandablePluginBySpec(pvc-6779e224-5539-11e9-8b6d-001a4a16010c) -> returning noopExpandableVolumePluginInstance
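One thing worth checking, given the repeated "Volume not attached according to node status": with flexvolume, the attach/detach callouts run in kube-controller-manager, so the driver binary must also be present on the master nodes, not only the workers. A quick check on each master (path taken from this thread; adjust if the plugin dir was overridden):

```shell
# Verify the flexvolume driver is installed where the controller-manager runs.
DRIVER=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver
if [ -x "$DRIVER" ]; then
  "$DRIVER" init
else
  echo "driver missing on this node: $DRIVER"
fi
```

If the driver is missing there, provisioning and mounting can still appear to work while every attach silently never happens, which matches the symptoms in these logs.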
$ docker logs kubelet --follow --since=10m
I0402 11:18:55.599340 1479 plugins.go:563] Loaded volume plugin "flexvolume-ovirt/ovirt-flexvolume-driver"
I0402 11:21:01.543181 1479 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
I0402 11:21:09.655848 1479 kubelet.go:1908] SyncLoop (ADD, "api"): "testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)"
W0402 11:21:09.790612 1479 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-6779e224-5539-11e9-8b6d-001a4a16010c
I0402 11:21:09.791410 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
I0402 11:21:09.791781 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-q22vr" (UniqueName: "kubernetes.io/secret/677c3ecc-5539-11e9-8b6d-001a4a16010c-default-token-q22vr") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:09.792146 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:10.292090943 +0000 UTC m=+78012.138152734 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:09.892386 1479 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-q22vr" (UniqueName: "kubernetes.io/secret/677c3ecc-5539-11e9-8b6d-001a4a16010c-default-token-q22vr") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
I0402 11:21:09.918113 1479 operation_generator.go:571] MountVolume.SetUp succeeded for volume "default-token-q22vr" (UniqueName: "kubernetes.io/secret/677c3ecc-5539-11e9-8b6d-001a4a16010c-default-token-q22vr") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
I0402 11:21:10.294743 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:10.295430 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:11.295371032 +0000 UTC m=+78013.141432883 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:11.347854 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:11.348002 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:13.347959385 +0000 UTC m=+78015.194021178 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:13.368623 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:13.376581 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:17.376522225 +0000 UTC m=+78019.222584017 (durationBeforeRetry 4s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:17.389792 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:17.397848 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:25.397793578 +0000 UTC m=+78027.243855391 (durationBeforeRetry 8s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:25.416404 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:25.426350 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:21:41.426034157 +0000 UTC m=+78043.272095946 (durationBeforeRetry 16s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:21:41.437208 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:21:41.450006 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:22:13.449968644 +0000 UTC m=+78075.296030436 (durationBeforeRetry 32s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
I0402 11:22:13.473302 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:22:13.480263 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:23:17.480198527 +0000 UTC m=+78139.326260318 (durationBeforeRetry 1m4s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
E0402 11:23:12.760937 1479 kubelet.go:1680] Unable to mount volumes for pod "testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]; skipping pod
E0402 11:23:12.761531 1479 pod_workers.go:190] Error syncing pod 677c3ecc-5539-11e9-8b6d-001a4a16010c ("testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]
I0402 11:23:17.518182 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:23:17.524267 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:25:19.524231075 +0000 UTC m=+78261.370292863 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
W0402 11:23:25.930055 1479 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-6779e224-5539-11e9-8b6d-001a4a16010c
I0402 11:25:19.603459 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:25:19.625879 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:27:21.625816242 +0000 UTC m=+78383.471878043 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
E0402 11:25:28.910529 1479 kubelet.go:1680] Unable to mount volumes for pod "testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]; skipping pod
E0402 11:25:28.910985 1479 pod_workers.go:190] Error syncing pod 677c3ecc-5539-11e9-8b6d-001a4a16010c ("testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]
W0402 11:25:43.974331 1479 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-6779e224-5539-11e9-8b6d-001a4a16010c
I0402 11:26:01.544393 1479 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
I0402 11:27:21.688595 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:27:21.717938 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:29:23.717895178 +0000 UTC m=+78505.563956971 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
E0402 11:27:46.911314 1479 kubelet.go:1680] Unable to mount volumes for pod "testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]; skipping pod
E0402 11:27:46.913231 1479 pod_workers.go:190] Error syncing pod 677c3ecc-5539-11e9-8b6d-001a4a16010c ("testpodwithflex_default(677c3ecc-5539-11e9-8b6d-001a4a16010c)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-q22vr]
W0402 11:27:57.948041 1479 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-6779e224-5539-11e9-8b6d-001a4a16010c
I0402 11:29:23.803919 1479 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6779e224-5539-11e9-8b6d-001a4a16010c" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c") pod "testpodwithflex" (UID: "677c3ecc-5539-11e9-8b6d-001a4a16010c")
E0402 11:29:23.811740 1479 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\"" failed. No retries permitted until 2019-04-02 11:31:25.811507465 +0000 UTC m=+78627.657569283 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-6779e224-5539-11e9-8b6d-001a4a16010c\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-6779e224-5539-11e9-8b6d-001a4a16010c\") pod \"testpodwithflex\" (UID: \"677c3ecc-5539-11e9-8b6d-001a4a16010c\") "
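The durationBeforeRetry values in this log follow the kubelet volume manager's exponential backoff: the wait doubles after each failed attach, capped at about 2m2s (122s). A quick sketch of the pattern (the 500ms starting delay is an assumption based on a later node log in this thread; the first retry shown above starts at 1s):

```python
# Sketch of the retry schedule visible in the kubelet log above:
# each failed attach doubles the wait, capped at 2m2s (122 s).
def backoff_durations(initial=0.5, cap=122.0, attempts=10):
    delays, d = [], initial
    for _ in range(attempts):
        delays.append(min(d, cap))
        d *= 2
    return delays

print(backoff_durations())
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 122.0, 122.0]
```

This is why the later retries above are all spaced 2m2s apart: the backoff has hit its cap while the underlying attach keeps failing.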
You mentioned you have a containerized kubelet; does that mean the flexvolume driver is not exposed on the master nodes? It needs to be available on the masters as well.
Yes, the kubelet is containerized, but the host path is exposed to the kubelet container, and the kubelet runs on all nodes in the cluster, see:
[root@lab ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready controlplane,etcd,worker 6d5h v1.13.5 10.1.1.171 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
k8s-node2 Ready controlplane,etcd,worker 6d5h v1.13.5 10.1.1.172 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
k8s-node3 Ready controlplane,etcd,worker 6d5h v1.13.5 10.1.1.173 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
k8s-node4 Ready worker 6d5h v1.13.5 10.1.1.174 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
k8s-node5 Ready worker 6d5h v1.13.5 10.1.1.175 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
k8s-node6 Ready worker 6d5h v1.13.5 10.1.1.176 <none> Ubuntu 18.04.2 LTS 4.15.0-46-generic docker://18.9.3
[root@lab ~]# for i in {1..6}; do ssh 10.1.1.17$i 'hostname; ls -la /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/'; done
k8s-node1
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
k8s-node2
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
k8s-node3
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
k8s-node4
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
k8s-node5
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
k8s-node6
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
[root@lab ~]# for i in {1..6}; do ssh 10.1.1.17$i 'docker exec -it kubelet hostname; ls -la /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/'; done
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
the input device is not a TTY
total 33280
drwx------ 2 root root 4096 Apr 2 13:18 .
drwxr-xr-x 3 root root 4096 Apr 2 13:18 ..
-rwxr-xr-x 1 root root 34062456 Apr 2 13:18 ovirt-flexvolume-driver
-rw-r--r-- 1 root root 164 Apr 2 13:18 ovirt-flexvolume-driver.conf
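For reference, the host-to-kubelet propagation described earlier (the "Rancher" section of the Rook flexvolume guide) can be expressed in RKE's cluster.yml via an extra bind mount on the kubelet service. A minimal sketch, assuming the default flexvolume plugin path:

```yaml
# cluster.yml (RKE): bind-mount the flexvolume plugin dir into the
# containerized kubelet so drivers installed on the host are visible inside it.
services:
  kubelet:
    extra_binds:
      - /usr/libexec/kubernetes/kubelet-plugins/volume/exec:/usr/libexec/kubernetes/kubelet-plugins/volume/exec
```

(The "the input device is not a TTY" lines above are just `docker exec -it` failing to allocate a pseudo-terminal over a non-interactive ssh session; dropping `-t` avoids them and doesn't affect the listing.)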
Running into the same issue on my k8s cluster as well. The volume gets created in the storage domain but fails to attach to the worker node during pod scheduling. Here is the output from 'kubectl describe pods':
Type     Reason            Age                     From                  Message
Warning  FailedScheduling  5m36s (x10 over 5m45s)  default-scheduler     pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Normal   Scheduled         5m36s                   default-scheduler     Successfully assigned default/testpodwithflex to kubeworker2
Warning  FailedMount       76s (x2 over 3m34s)     kubelet, kubeworker2  Unable to mount volumes for pod "testpodwithflex_default(7394694f-5677-11e9-b7f6-060000080001)": timeout expired waiting for volumes to attach or mount for pod "default"/"testpodwithflex". list of unmounted volumes=[pv0002]. list of unattached volumes=[pv0002 default-token-cxhfh]
Getting the same issue on k8s 1.12.2 on a 3-controller, 3-worker cluster installed via kubespray on top of RHHI 1.5. I had to change the plugin dir to /var/lib/kubelet/volume-plugins/. The volume is created but never attached to the VM; the engine log seems to indicate that the VM name in the API call is blank.
Here is the relevant engine.log:
2019-04-04 09:21:38,246-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-17) [4c4b87c0] Lock freed to object 'EngineLock:{exclusiveLocks='[280922c6-d886-4841-be63-b2d3af659975=PROVIDER]', sharedLocks=''}'
2019-04-04 09:23:04,951-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] Running command: AddDiskCommand internal: false. Entities affected : ID: 13029b09-8e73-4d9b-aa8a-7ee5bf4946c0 Type: StorageAction group CREATE_DISK with role type USER
2019-04-04 09:23:04,981-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 13029b09-8e73-4d9b-aa8a-7ee5bf4946c0 Type: Storage
2019-04-04 09:23:05,022-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{storagePoolId='5a2ee1d6-0099-02f9-02c7-000000000199', ignoreFailoverLimit='false', storageDomainId='13029b09-8e73-4d9b-aa8a-7ee5bf4946c0', imageGroupId='ab1adcb5-ee76-4cf6-8ed7-4086986d293f', imageSizeInBytes='1073741824', volumeFormat='RAW', newImageId='be0c5479-8463-4eb9-a54e-66082b268a23', imageType='Sparse', newImageDescription='{"DiskAlias":"pvc-2332d221-56e5-11e9-a471-001a4a168674","DiskDescription":""}', imageInitialSizeInBytes='0'}), log id: ea14892
2019-04-04 09:23:05,023-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2019-04-04 09:23:05,148-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] FINISH, CreateImageVDSCommand, return: be0c5479-8463-4eb9-a54e-66082b268a23, log id: ea14892
2019-04-04 09:23:05,157-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'c576816d-da07-49bd-844a-fa777ba55f66'
2019-04-04 09:23:05,157-05 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] CommandMultiAsyncTasks::attachTask: Attaching task 'dde1e719-4fe0-4989-afae-3edc21404327' to command 'c576816d-da07-49bd-844a-fa777ba55f66'.
2019-04-04 09:23:05,184-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] Adding task 'dde1e719-4fe0-4989-afae-3edc21404327' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2019-04-04 09:23:05,206-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] BaseAsyncTask::startPollingTask: Starting to poll task 'dde1e719-4fe0-4989-afae-3edc21404327'.
2019-04-04 09:23:05,233-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-64) [8df967e8-5cf6-4089-87c8-1069666ea823] EVENT_ID: USER_ADD_DISK(2,020), Add-Disk operation of 'pvc-2332d221-56e5-11e9-a471-001a4a168674' was initiated by admin@internal-authz.
2019-04-04 09:23:07,215-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [8df967e8-5cf6-4089-87c8-1069666ea823] Command 'AddDisk' (id: '70f6b70f-dd5e-469b-9971-733918a4f4b2') waiting on child command id: 'c576816d-da07-49bd-844a-fa777ba55f66' type:'AddImageFromScratch' to complete
2019-04-04 09:23:11,221-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [8df967e8-5cf6-4089-87c8-1069666ea823] Command 'AddDisk' (id: '70f6b70f-dd5e-469b-9971-733918a4f4b2') waiting on child command id: 'c576816d-da07-49bd-844a-fa777ba55f66' type:'AddImageFromScratch' to complete
2019-04-04 09:23:12,668-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2019-04-04 09:23:12,679-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [] SPMAsyncTask::PollTask: Polling task 'dde1e719-4fe0-4989-afae-3edc21404327' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2019-04-04 09:23:12,686-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [] BaseAsyncTask::onTaskEndSuccess: Task 'dde1e719-4fe0-4989-afae-3edc21404327' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2019-04-04 09:23:12,688-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'c576816d-da07-49bd-844a-fa777ba55f66' has ended -> executing 'endAction'
2019-04-04 09:23:12,688-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'c576816d-da07-49bd-844a-fa777ba55f66'): calling endAction '.
2019-04-04 09:23:12,689-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2019-04-04 09:23:12,697-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] Command [id=c576816d-da07-49bd-844a-fa777ba55f66]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2019-04-04 09:23:12,697-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2019-04-04 09:23:12,697-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2019-04-04 09:23:12,697-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'dde1e719-4fe0-4989-afae-3edc21404327'
2019-04-04 09:23:12,699-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='5a2ee1d6-0099-02f9-02c7-000000000199', ignoreFailoverLimit='false', taskId='dde1e719-4fe0-4989-afae-3edc21404327'}), log id: 22697b78
2019-04-04 09:23:12,701-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] START, HSMClearTaskVDSCommand(HostName = ch3-rhv1.pr.edgelp.net, HSMTaskGuidBaseVDSCommandParameters:{hostId='0959b405-24e3-4847-b353-ad3351c2c5a8', taskId='dde1e719-4fe0-4989-afae-3edc21404327'}), log id: 531f623e
2019-04-04 09:23:12,724-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] FINISH, HSMClearTaskVDSCommand, log id: 531f623e
2019-04-04 09:23:12,724-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] FINISH, SPMClearTaskVDSCommand, log id: 22697b78
2019-04-04 09:23:12,731-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] BaseAsyncTask::removeTaskFromDB: Removed task 'dde1e719-4fe0-4989-afae-3edc21404327' from DataBase
2019-04-04 09:23:12,731-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1515463) [8df967e8-5cf6-4089-87c8-1069666ea823] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'c576816d-da07-49bd-844a-fa777ba55f66'
2019-04-04 09:23:19,228-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [8df967e8-5cf6-4089-87c8-1069666ea823] Command 'AddDisk' id: '70f6b70f-dd5e-469b-9971-733918a4f4b2' child commands '[c576816d-da07-49bd-844a-fa777ba55f66]' executions were completed, status 'SUCCEEDED'
2019-04-04 09:23:20,249-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2019-04-04 09:23:20,260-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2019-04-04 09:23:20,267-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='5a2ee1d6-0099-02f9-02c7-000000000199', ignoreFailoverLimit='false', storageDomainId='13029b09-8e73-4d9b-aa8a-7ee5bf4946c0', imageGroupId='ab1adcb5-ee76-4cf6-8ed7-4086986d293f', imageId='be0c5479-8463-4eb9-a54e-66082b268a23'}), log id: 57e8bae9
2019-04-04 09:23:20,269-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] START, GetVolumeInfoVDSCommand(HostName = ch3-rhv1.pr.edgelp.net, GetVolumeInfoVDSCommandParameters:{hostId='0959b405-24e3-4847-b353-ad3351c2c5a8', storagePoolId='5a2ee1d6-0099-02f9-02c7-000000000199', storageDomainId='13029b09-8e73-4d9b-aa8a-7ee5bf4946c0', imageGroupId='ab1adcb5-ee76-4cf6-8ed7-4086986d293f', imageId='be0c5479-8463-4eb9-a54e-66082b268a23'}), log id: 653cb916
2019-04-04 09:23:20,301-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@d29daacf, log id: 653cb916
2019-04-04 09:23:20,302-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [8df967e8-5cf6-4089-87c8-1069666ea823] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@d29daacf, log id: 57e8bae9
2019-04-04 09:23:20,316-05 WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [] VM is null - no unlocking
2019-04-04 09:23:20,345-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'pvc-2332d221-56e5-11e9-a471-001a4a168674' was successfully added.
2019-04-04 09:24:28,222-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Setting new tasks map. The map contains now 0 tasks
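The "VM is null - no unlocking" warning above suggests the attach callout never resolved the node to an oVirt VM. One thing worth checking (an assumption about how the driver resolves nodes, not confirmed here) is that each k8s node name matches its oVirt VM name exactly. A hypothetical illustration; the VM names below are invented, only "kubeworker2" comes from the outputs above:

```python
# Hypothetical check: the flexvolume attach path has to map a k8s node
# name to an oVirt VM name; if the names differ, the engine sees no VM
# ("VM is null - no unlocking"). The oVirt VM names here are made up.
k8s_nodes = {"kubeworker1", "kubeworker2"}    # from `kubectl get nodes`
ovirt_vms = {"kubeworker1", "kube-worker-2"}  # from the engine UI or /ovirt-engine/api/vms

unmatched = sorted(k8s_nodes - ovirt_vms)
print(unmatched)  # nodes the driver cannot resolve to a VM
# ['kubeworker2']
```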
The kube-controller-manager log is unhelpful:
I0404 14:23:03.825856 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"1g-ovirt-cow-disk", UID:"2332d221-56e5-11e9-a471-001a4a168674", APIVersion:"v1", ResourceVersion:"28360584", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "ovirt-volume-provisioner" or manually created by system administrator
I0404 14:23:05.309454 1 pv_controller.go:824] volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" entered phase "Bound"
I0404 14:23:05.309485 1 pv_controller.go:963] volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" bound to claim "default/1g-ovirt-cow-disk"
I0404 14:23:05.335923 1 pv_controller.go:768] claim "default/1g-ovirt-cow-disk" entered phase "Bound"
Here's the node log:
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.339164 19101 kubelet.go:1883] SyncLoop (ADD, "api"): "testpodwithflex_default(23347d4a-56e5-11e9-a471-001a4a168674)"
Apr 04 10:23:05 work-2 kubelet[19101]: W0404 10:23:05.396928 19101 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-2332d221-56e5-11e9-a471-001a4a168674
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.465001 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9ttwp" (UniqueName: "kubernetes.io/secret/23347d4a-56e5-11e9-a471-001a4a168674-default-token-9ttwp") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.465046 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:05 work-2 kubelet[19101]: E0404 10:23:05.465112 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:05.965089802 -0400 EDT m=+1797047.280209809 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.565428 19101 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-9ttwp" (UniqueName: "kubernetes.io/secret/23347d4a-56e5-11e9-a471-001a4a168674-default-token-9ttwp") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.571433 19101 operation_generator.go:568] MountVolume.SetUp succeeded for volume "default-token-9ttwp" (UniqueName: "kubernetes.io/secret/23347d4a-56e5-11e9-a471-001a4a168674-default-token-9ttwp") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:05 work-2 kubelet[19101]: I0404 10:23:05.966356 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:05 work-2 kubelet[19101]: E0404 10:23:05.966461 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:06.966435436 -0400 EDT m=+1797048.281555444 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:06 work-2 kubelet[19101]: I0404 10:23:06.969095 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:06 work-2 kubelet[19101]: E0404 10:23:06.969166 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:08.969143726 -0400 EDT m=+1797050.284263733 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:08 work-2 kubelet[19101]: I0404 10:23:08.973879 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:08 work-2 kubelet[19101]: E0404 10:23:08.974040 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:12.973997838 -0400 EDT m=+1797054.289117844 (durationBeforeRetry 4s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:10 work-2 kubelet[19101]: I0404 10:23:10.950402 19101 setters.go:72] Using node IP: "10.86.100.92"
Apr 04 10:23:12 work-2 kubelet[19101]: I0404 10:23:12.985275 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:12 work-2 kubelet[19101]: E0404 10:23:12.989728 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:20.989693711 -0400 EDT m=+1797062.304813739 (durationBeforeRetry 8s). Error: "Volume not attached according to node status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:20 work-2 kubelet[19101]: I0404 10:23:20.974100 19101 setters.go:72] Using node IP: "10.86.100.92"
Apr 04 10:23:21 work-2 kubelet[19101]: I0404 10:23:21.009265 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:21 work-2 kubelet[19101]: E0404 10:23:21.013295 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:23:37.013271336 -0400 EDT m=+1797078.328391343 (durationBeforeRetry 16s). Error: "Volume not attached according to node status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
Apr 04 10:23:28 work-2 kubelet[19101]: E0404 10:23:28.885675 19101 dns.go:131] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.86.100.53 192.168.100.100 192.168.100.102
Apr 04 10:23:31 work-2 kubelet[19101]: I0404 10:23:31.001174 19101 setters.go:72] Using node IP: "10.86.100.92"
Apr 04 10:23:31 work-2 kubelet[19101]: E0404 10:23:31.885547 19101 dns.go:131] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.86.100.53 192.168.100.100 192.168.100.102
Apr 04 10:23:37 work-2 kubelet[19101]: I0404 10:23:37.066786 19101 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2332d221-56e5-11e9-a471-001a4a168674" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674") pod "testpodwithflex" (UID: "23347d4a-56e5-11e9-a471-001a4a168674")
Apr 04 10:23:37 work-2 kubelet[19101]: E0404 10:23:37.071366 19101 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-04 10:24:09.071336882 -0400 EDT m=+1797110.386456887 (durationBeforeRetry 32s). Error: "Volume not attached according to node status for volume \"pvc-2332d221-56e5-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-2332d221-56e5-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"23347d4a-56e5-11e9-a471-001a4a168674\") "
@smithian Where is the blank VM in the attach API call in ovirt-engine.log? Look for AttachDiskToVm.
If that is the case, I think what happens is that it can't extract the target VM id. To get that, the flex driver tries to fetch the node's systemUUID from the API. It's like invoking 'kubectl get nodes', but for that to work it needs ~/.kube/config available. So if it's the master node that does this, you need to make sure ~/.kube/config or the KUBECONFIG environment variable is set up on all masters and nodes.
@rgolangh I tried running the following directly inside the ovirt-flexvolume-driver container:
ovirt-flexvolume-driver attach '{ "ovirtStorageDomain": "My-storage-domain", "ovirtVolumeName": "pv0003", "ovirtDiskFormat": "raw", "kubernetes.io/fsType": "ext4", "kubernetes.io/readwrite": "rw", "capacity": "1Gi" }' kubeworker2
It reported the following error:
ERROR: logging before flag.Parse: W0404 23:52:12.435902 292 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. nodes is forbidden: User "system:serviceaccount:default:default" cannot list resource "nodes" in API group "" at the cluster scope
I wonder if that's causing this issue. I copied ~/.kube/config from the master to the compute nodes, but got the same error. I tried this around the last week of December and it was working fine; any chance the following patch might have caused a regression in the k8s environment:
Also, I noticed getSystemUUIDByNodeName is not mapping to the oVirt VM UUID, right? getKubeNodes returns the kubernetes node UUID, right? Am I missing something here?
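The "nodes is forbidden" error above indicates the service account the driver runs under lacks cluster-scope read access to nodes. A hedged sketch of RBAC that would grant it (the role and binding names are illustrative, not from this project):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader              # illustrative name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flexdriver-node-reader   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- kind: ServiceAccount
  name: default                  # the account named in the error message
  namespace: default
```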
I did not have ~/.kube/config populated on any host node (I am running containerised k8s based on RKE), just on my "k8s client" machine. If I run ovirt-flexvolume-driver attach I get the same error as @jagtapdeepak. Once I copied ~/.kube/config out to all host nodes, it started to do something ;)
root@k8s-node1:/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver# ./ovirt-flexvolume-driver attach '{"ovirtStorageDomain": "pole_200G_B", "ovirtVolumeName": "pvc-222adf28-5708-11e9-9c13-001a4a160110", "ovirtDiskFormat": "raw", "kubernetes.io/fsType": "ext4", "kubernetes.io/readwrite": "rw", "capacity": "1Gi"}' k8s-node1
ERROR: logging before flag.Parse: W0405 10:00:41.410968 24518 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
ERROR: logging before flag.Parse: W0405 10:00:41.411031 24518 client_config.go:557] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided
Once I copied ~/.kube/config to all nodes, I got a response from the oVirt API:
root@k8s-node1:/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver# ./ovirt-flexvolume-driver attach '{"ovirtStorageDomain": "pole_200G_B", "ovirtVolumeName": "pvc-222adf28-5708-11e9-9c13-001a4a160110", "ovirtDiskFormat": "raw", "kubernetes.io/fsType": "ext4", "kubernetes.io/readwrite": "rw", "capacity": "1Gi"}' k8s-node1
{"status":"Failure","message":"disk doesn't exist"}
engine.log
2019-04-05 10:12:31,862+02 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-210) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2019-04-05 10:12:32,075+02 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-210) [323e77f0] Running command: CreateUserSessionCommand internal: false.
2019-04-05 10:12:32,110+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-210) [323e77f0] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '10.1.1.171' using session 'E2nRhHa47kSeizqU5SE0CG6Q7F+YqtOCw9txWQXHnTHapzhXOP6Xaovkw0eg4DhEKu2R2OZOpGwCSNbaKfJRGQ==' logged in.
2019-04-05 10:12:32,346+02 WARN [org.ovirt.engine.core.bll.SearchQuery] (default task-210) [b6efe298-a12e-456c-ac5d-d6f667496d72] ResourceManager::searchBusinessObjects - Invalid search text - ''Disks : name=''
ovirt API query:
curl -u admin@internal:XXXXXXXX --insecure https://ovirt2.XXXXXXX/ovirt-engine/api/disks?search=pvc-222adf28-5708-11e9-9c13-001a4a160110
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disks>
<disk href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a" id="8949ec57-7f41-4d92-b683-1eefe750974a">
<actions>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/reduce" rel="reduce"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/move" rel="move"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/refreshlun" rel="refreshlun"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/copy" rel="copy"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/sparsify" rel="sparsify"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/export" rel="export"/>
</actions>
<name>pvc-222adf28-5708-11e9-9c13-001a4a160110</name>
<description></description>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/8949ec57-7f41-4d92-b683-1eefe750974a/statistics" rel="statistics"/>
<actual_size>1073741824</actual_size>
<alias>pvc-222adf28-5708-11e9-9c13-001a4a160110</alias>
<content_type>data</content_type>
<format>cow</format>
<image_id>7f379d32-11be-4c40-9e56-56711c670829</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>1073741824</provisioned_size>
<qcow_version>qcow2_v3</qcow_version>
<shareable>false</shareable>
<sparse>true</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<total_size>0</total_size>
<wipe_after_delete>false</wipe_after_delete>
<disk_profile href="/ovirt-engine/api/diskprofiles/826baec9-a922-41b5-8b02-2f73aae66b0f" id="826baec9-a922-41b5-8b02-2f73aae66b0f"/>
<quota href="/ovirt-engine/api/datacenters/de3890a0-32a0-11e8-95be-5254008463bd/quotas/23549508-32a1-11e8-931b-5254008463bd" id="23549508-32a1-11e8-931b-5254008463bd"/>
<storage_domains>
<storage_domain href="/ovirt-engine/api/storagedomains/a74c32e3-ddd5-4f06-9d9c-3ba7aa153d98" id="a74c32e3-ddd5-4f06-9d9c-3ba7aa153d98"/>
</storage_domains>
</disk>
</disks>
I hope my JSON was correct during my test. It looks like there is some regression in the search, and it does not call the ovirt-flexvolume-driver at all during pod creation; I only see attempts in oVirt when I run it manually by hand. Also, having ~/.kube/config on all nodes seems like overkill to me, since each node already has ovirt-flexvolume-driver.conf populated with an ovirtVmId. When a pod starts on node XYZ, ovirt-flexvolume-driver could pick up the ovirtVmId from that node and call the oVirt API to attach the disk. I don't see why we need ~/.kube/config on all nodes (but most probably I am missing something about the whole k8s architecture, so sorry for that).
@jagtapdeepak @kissi7 your JSON key ovirtVolumeName is incorrect; it should be kubernetes.io/pvOrVolumeName (as per internal/definitions.go).
I was able to attach the volume with this:
./ovirt-flexvolume-driver attach '{
"ovirtStorageDomain": "Data",
"kubernetes.io/pvOrVolumeName": "pvc-8396e603-57bb-11e9-a471-001a4a168674",
"kubernetes.io/fsType": "ext4",
"kubernetes.io/readwrite": "rw",
"capacity": "1Gi"
}' work-2
And then mount with:
'{
"ovirtStorageDomain": "Data",
"kubernetes.io/pvOrVolumeName": "pvc-8396e603-57bb-11e9-a471-001a4a168674",
"kubernetes.io/fsType": "ext4",
"kubernetes.io/readwrite": "rw",
"capacity": "1Gi"
}'
{"status":"Success","message":""}
But I still get the error: Volume not attached according to node status for volume \"pvc-8396e603-57bb-11e9-a471-001a4a168674\"
Apr 05 17:33:02 work-2 kubelet[9695]: E0405 17:33:02.414926 9695 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-8396e603-57bb-11e9-a471-001a4a168674\"" failed. No retries permitted until 2019-04-05 17:35:04.414876299 -0400 EDT m=+16973.873181275 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-8396e603-57bb-11e9-a471-001a4a168674\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-8396e603-57bb-11e9-a471-001a4a168674\") pod \"testpodwithflex\" (UID: \"83997fc3-57bb-11e9-a471-001a4a168674\") "
Even though doing a kubectl get node work-2 -o yaml shows the pv in use by the node...
apiVersion: v1
kind: Node
metadata:
annotations:
node.alpha.kubernetes.io/ttl: "0"
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: 2018-11-19T19:17:10Z
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/hostname: work-2
node-role.kubernetes.io/node: ""
name: work-2
...
osImage: Red Hat Enterprise Linux Server 7.5 (Maipo)
systemUUID: 0E2489F9-96B8-4BA4-8520-951CF5C8F6E2
volumesInUse:
- flexvolume-ovirt/ovirt-flexvolume-driver/pvc-8396e603-57bb-11e9-a471-001a4a168674
I'm not really sure what's missing here.
@smithian Thanks for the correction. I went with the JSON specification in cmd/ovirt-flexvolume-driver/integ-tests_spec.go. I tried your JSON format as well, but got the same error: "nodes is forbidden: User "system:serviceaccount:default:default" cannot list resource "nodes" in API group "".
@rgolangh Is there a document explaining the build environment & the steps to build this code? I tried building it but am running into issues, maybe related to version incompatibility.
Here is what I did so far:
Install go version 1.12
go get github.com/oVirt/ovirt-openshift-extensions
cd github.com/oVirt/ovirt-openshift-extensions
make
Would appreciate your input.
deepak@deepak-Inspiron-15-5578:~/go/src/github.com/oVirt/ovirt-openshift-extensions$ make
go clean
git clean -dfx -e .idea*
dep ensure --update
Solving failure: Could not introduce k8s.io/kubernetes@v1.12.7, as it requires problematic packages from k8s.io/apiserver (current version kubernetes-1.11.1): k8s.io/apiserver/pkg/apis/config is missing. k8s.io/apiserver/pkg/apis/config/v1alpha1 is missing.
Makefile:83: recipe for target 'deps' failed
make: *** [deps] Error 1
@smithian I changed the JSON and ran the command from the node and from the kubelet container, and it attached the disk. It is still strange why it does not work as designed (it looks like k8s does not call the driver).
./ovirt-flexvolume-driver attach '{"ovirtStorageDomain": "pole_200G_B", "kubernetes.io/pvOrVolumeName": "pvc-222adf28-5708-11e9-9c13-001a4a160110", "ovirtDiskFormat": "raw", "kubernetes.io/fsType": "ext4", "kubernetes.io/readwrite": "rw", "capacity": "1Gi"}' k8s-node1
{"status":"Success","message":"success","device":"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_8949ec57-7f41-4d92-b"}
I'll try to deploy a working kubernetes setup myself to find the root cause. Can you try to increase the debug logs?
The reason the kubeconfig is needed is to convert the nodeName to a VM id. A master node can perform the attach API call (this is by design), and the master node's VM name doesn't correspond to the target node that is running the container. That's why I perform an API call to get the systemUUID of the target node. It is possible that the kubeconfig passed to the node is one that isn't authorized to perform get nodes.
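The nodeName-to-VM-id step described above can be sketched as follows; the node data is stubbed to stand in for the 'kubectl get nodes' response, and this is an illustration rather than the driver's actual code:

```python
def system_uuid_by_node_name(nodes, node_name):
    """Mimics the driver's lookup: for oVirt-provisioned guests, the node's
    systemUUID is the oVirt VM id used in the attach call."""
    for node in nodes:
        if node["metadata"]["name"] == node_name:
            return node["status"]["nodeInfo"]["systemUUID"]
    raise LookupError(f"node {node_name!r} not found")

# Stubbed node list standing in for the API response.
nodes = [{"metadata": {"name": "work-2"},
          "status": {"nodeInfo": {"systemUUID": "0E2489F9-96B8-4BA4-8520-951CF5C8F6E2"}}}]
print(system_uuid_by_node_name(nodes, "work-2"))
# -> 0E2489F9-96B8-4BA4-8520-951CF5C8F6E2
```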
On my cluster, after making the following changes, attach is working as expected:
I noticed attach was not getting invoked at all in the default k8s config; only init & getvolumename were getting invoked. Once I set 'enableControllerAttachDetach: false' in /var/lib/kubelet/config.yaml on all k8s nodes (master & compute) & restarted the kubelet, it started invoking attach after getvolumename.
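For reference, the change described above is a one-line edit to the kubelet config file; a minimal fragment (all other fields omitted):

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableControllerAttachDetach: false  # kubelet performs attach/detach itself
```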
Another issue was BuildConfigFromFlags printing warning messages like glog.Warningf("Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.") and glog.Warning("error creating inClusterConfig, falling back to default config: ", err), which caused the k8s reconciler to run into issues, as it was expecting JSON output from the driver. After replacing all instances of getSystemUUIDByNodeName in ovirt-flexvolume-driver.go with the following code snippet and patching ovirt-flexvolume-driver on all nodes, it started working as expected:
/*
vmId, err := getSystemUUIDByNodeName(nodeName)
if err != nil {
	return internal.FailedResponseFromError(err), err
}
vm, err := ovirt.GetVMById(vmId)
*/
vm, err := ovirt.GetVM(nodeName)
I saw the same thing as @jagtapdeepak, that attach was not being called in the controller-manager log. However, I wanted to keep controller-managed attachment as that is the preferred method (as per link 2 below).
In my case, there were two problems. The first was that the non-default flexvolume driver path had not been exposed to the controller-manager, so the controller could not call the driver. The second was that after the driver was accessible to the controller-manager, the kube-client.go from the ovirt-flexvolume driver could not find the kube-config that was specified in the controller-manager manifest.
I resolved this by adding the following to my manifest files on all controllers at /etc/kubernetes/manifests/kube-controller-manager.manifest:
spec:
  containers:
  - name: kube-controller-manager
    command:
    - controller-manager
    - --kubeconfig=/etc/kubernetes/kube-controller-manager-kubeconfig.yaml  # this was existing but not recognized by the driver
    - --flex-volume-plugin-dir=/var/lib/kubelet/volume-plugins
    env:
    - name: KUBECONFIG
      value: "/etc/kubernetes/kube-controller-manager-kubeconfig.yaml"
    volumeMounts:
    - mountPath: /var/lib/kubelet/volume-plugins
      name: flexvolume-mount
      readOnly: true
  volumes:
  - name: flexvolume-mount
    hostPath:
      path: /var/lib/kubelet/volume-plugins
It would be helpful if kube-client.go had some more smarts to figure out the location of the kubeconfig file without relying on default paths or an environment variable, perhaps by reading the --kubeconfig flag passed to the pod. This is working for now, though.
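A sketch of the lookup order such logic could use (illustration only, not the project's actual code): explicit flag first, then the KUBECONFIG env var, then the conventional default path.

```python
import os

def resolve_kubeconfig(flag_value=None):
    """Resolve a kubeconfig path: explicit --kubeconfig flag first,
    then the KUBECONFIG env var, then ~/.kube/config."""
    if flag_value:
        return flag_value
    env = os.environ.get("KUBECONFIG")
    if env:
        return env
    return os.path.expanduser("~/.kube/config")

# Simulate the env var the controller-manager manifest above sets.
os.environ["KUBECONFIG"] = "/etc/kubernetes/kube-controller-manager-kubeconfig.yaml"
print(resolve_kubeconfig())
# -> /etc/kubernetes/kube-controller-manager-kubeconfig.yaml
```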
I found these links to be helpful in troubleshooting, for anyone who comes after me:
Same issue here. The disk is created and the pv is bound to the pvc. The disk is also deleted when the pod and pvc are deleted. However, the disk is not attached to the VM, and consequently the pod has an unbound pvc.
ovirt/rhevm version: 4.2.8.3-0.1.el7ev. Cluster nodes: CentOS 7
Cluster installed with kubespray 2.9. The oVirt user is StorageAdmin and UserVMManager. I copied ~/.kube/config from master to node.
It's a fresh cluster and I didn't change anything else.
Let me know if you'd like additional info (logs, etc.).
Do you see the call to attach the disk on the flex driver in the kube-controller-manager logs? In the node logs? Look at both the journal and the kube-controller-manager container log. Do you see the call to AttachDiskToVm in /var/log/ovirt-engine/engine.log? See #110 for log examples.
@rgolangh See logs here: https://gist.github.com/Timoses/1c910e6c314a091a3ac70118cc3e13af
Some time around 09:20 I created the testpodwithflex pod.
@Timoses According to your node journal, your kubelet can't find the plugin specified by your volume. Kubespray sets KUBELET_VOLUME_PLUGIN to /var/lib/kubelet/volume-plugins by default, so the example flexvolume.yaml for the DaemonSet will not work. You can fix this either by redeploying kubespray with the kubelet_flexvolumes_plugins_dir variable set to /usr/libexec/kubernetes/kubelet-plugins/volume/exec/, or by modifying the DaemonSet yaml to install the plugin under /var/lib/kubelet/volume-plugins.
You'll also need to pass through the volume plugin directory to the controller-manager, see my comment above for details: https://github.com/oVirt/ovirt-openshift-extensions/issues/127#issuecomment-484688495
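A minimal sketch of the second option above (pointing the DaemonSet's hostPath at the directory kubespray's kubelet actually probes); the volume name is illustrative:

```yaml
# flexvolume.yaml fragment (illustrative volume name)
volumes:
- name: flexdriver-dir
  hostPath:
    path: /var/lib/kubelet/volume-plugins
```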
@smithian Thanks for the hints!! That definitely makes sense. Kubespray already sets --volume-plugin-dir=/var/lib/kubelet/volume-plugins. However, the kube-controller-manager is not set up accordingly.
- hostPath:
    path: /var/lib/kubelet/volume-plugins/
    type: ""
  name: plugindir
I tried both /var/lib/kubelet/volume-plugins/ and /var/lib/kubelet/volume-plugins/volume/exec/.
I checked the host's folder and the plugin seems to be correctly installed to:
The kubelet now spams:
May 02 09:19:49 kubernetes-dev-2-3.mgmt.hss.int kubelet[5741]: E0502 09:19:49.686752 5741 plugins.go:642] Error dynamically probing plugins: Error creating Flexvolume plugin from directory volume, skipping. Error: unexpected end of JSON input
May 02 09:19:49 kubernetes-dev-2-3.mgmt.hss.int kubelet[5741]: E0502 09:19:49.687162 5741 driver-call.go:267] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 02 09:19:49 kubernetes-dev-2-3.mgmt.hss.int kubelet[5741]: W0502 09:19:49.687183 5741 driver-call.go:150] FlexVolume: driver call failed: executable: /var/lib/kubelet/volume-plugins/volume/volume, args: [init], error: fork/exec /var/lib/kubelet/volume-plugins/volume/volume:
no such file or directory, output: ""
Not sure why it looks for /var/lib/kubelet/volume-plugins/volume/volume? That seems odd.
I also adjusted the kube-controller-manager on both masters as you suggested in the linked comment:
- --flex-volume-plugin-dir=/var/lib/kubelet/volume-plugins
- KUBECONFIG set to "/etc/kubernetes/controller-manager.conf" (copied from the --kubeconfig cmd arg)
- a hostPath mount of /var/lib/kubelet/volume-plugins
kubectl -n kube-system logs kube-controller says:
E0502 07:32:36.984343 1 reconciler.go:239] attacherDetacher.DetachVolume failed to start for volume "nil" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-abb44af3-6ca6-11e9-94b3-001a4a160161") on node "kubernetes-dev-2-3.mgmt.hss.int" : DetachVolume.FindAttachablePluginBySpec failed for volume "nil" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-abb44af3-6ca6-11e9-94b3-001a4a160161") on node "kubernetes-dev-2-3.mgmt.hss.int" : no volume plugin matched
Any ideas what could be wrong?
You need to also adjust the hostpath to expose /var/lib/kubelet/volume-plugins to the controller container. Double check that by getting a shell into the controller and check if you see the path and its content.
Yeh, seems to work:
# kubectl -n kube-system exec kube-controller-manager-kubernetes-dev-2-1 -- ls /var/lib/kubelet/volume-plugins/
ovirt~ovirt-flexvolume-driver
volume
# kubectl -n kube-system exec kube-controller-manager-kubernetes-dev-2-2 -- ls /var/lib/kubelet/volume-plugins/
ovirt~ovirt-flexvolume-driver
volume
(The double entry comes because I tried both /var/lib/kubelet/volume-plugins/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver and /var/lib/kubelet/volume-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver as hostPath for the flexvolume daemonset.)
Also ls the folder ovirt~ovirt-flexvolume-driver to make sure the binary and conf are there. If they are, try to bash into it and invoke /var/lib/kubelet/volume-plugins/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver init. If that succeeds, then a restart of the container might work, or mv the ovirt~ovirt-flexvolume-driver directory to /tmp and back to the same folder, to help the controller pick it up again.
On all hosts of the cluster I can do:
# /var/lib/kubelet/volume-plugins/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver
Usage:
ovirt-flexdriver init
ovirt-flexdriver attach <json params> <nodename>
ovirt-flexdriver detach <mount device> <nodename>
ovirt-flexdriver waitforattach <mount device> <json params>
ovirt-flexdriver mountdevice <mount dir> <mount device> <json params>
ovirt-flexdriver unmountdevice <mount dir>
ovirt-flexdriver isattached <json params> <nodename>
ovirt-flexdriver getvolumename <json params>
Same for /var/lib/kubelet/volume-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.
And the init cmd returns the following on all hosts (both paths):
{"status":"Success","message":"success","Capabilities":{"attach":true}}
The ovirt-flexvolume-driver.conf file exists as well; it is equal to the configmap and additionally contains an ovirtVmId.
Given that I use /var/lib/kubelet/volume-plugins in the kubelet and kube-controller-manager, where does the flexvolume.yml hostPath need to be set? /var/lib/kubelet/volume-plugins or /var/lib/kubelet/volume-plugins/volume/exec?
> If that succeeds then might be that a restart to the container would work or mv the ovirt~ovirt-flexvolume-driver to /tmp and back to the same folder, to help the controller to pick it up again.
It seems like the disk is now attached to the node and also to the pod.
The master kubelets still say:
May 02 16:15:58 kubernetes-dev-2-1.mgmt.hss.int kubelet[5273]: E0502 16:15:58.658557 5273 plugins.go:642] Error dynamically probing plugins: Error creating Flexvolume plugin from directory volume, skipping. Error: unexpected end of JSON input
May 02 16:15:58 kubernetes-dev-2-1.mgmt.hss.int kubelet[5273]: E0502 16:15:58.659123 5273 driver-call.go:267] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 02 16:15:58 kubernetes-dev-2-1.mgmt.hss.int kubelet[5273]: W0502 16:15:58.659145 5273 driver-call.go:150] FlexVolume: driver call failed: executable: /var/lib/kubelet/volume-plugins/volume/volume, args: [init], error: fork/exec /var/lib/kubelet/volume-plugins/volume/volume:
no such file or directory, output: ""
May 02 16:15:58 kubernetes-dev-2-1.mgmt.hss.int kubelet[5273]: E0502 16:15:58.659209 5273 plugins.go:642] Error dynamically probing plugins: Error creating Flexvolume plugin from directory volume, skipping. Error: unexpected end of JSON input
> Given that I use /var/lib/kubelet/volume-plugins in the kubelet and kube-controller-manager, where does the flexvolume.yml hostPath need to be set? /var/lib/kubelet/volume-plugins or /var/lib/kubelet/volume-plugins/volume/exec?
Mine is set to /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
Meaning, your kubelet also points to that (--volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/)? So in the end that is the "plugin-dir" this is referring to.
I put my flexvolume.yaml and kube-controller-manager.manifest in a gist so it's clearer:
https://gist.github.com/smithian/17f7c62d134082928d0ff0cb783fc21b
Thanks @smithian !! This helps!
I now get the following error: https://gist.github.com/Timoses/ea331d63c9285f9245bcb93703927252 (see controller-manager log)
The --kubeconfig file is present in the correct dir in the controller container:
# kubectl -n kube-system exec -it kube-controller-manager-kubernetes-dev-2-1.mgmt.hss.int -- ls -la /etc/kubernetes/controller-manager.conf
-rw-------. 1 root root 7076 May 8 11:12 /etc/kubernetes/controller-manager.conf
So I don't get the "Neither --kubeconfig nor --master was specified." message.
Also, what's up with "invalid character 'E' looking for beginning of value"?
Kubelet spams:
May 08 16:38:19 kubernetes-dev-2-3 kubelet[5551]: E0508 16:38:19.334504 5551 desired_state_of_world_populator.go:309] Failed to add volume "pv0002" (specName: "pvc-39cd9acf-719c-11e9-94a8-001a4a160161") for pod "39d1862f-719c-11e9-94a8-001a4a160161" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-39cd9acf-719c-11e9-94a8-001a4a160161" err=no volume plugin matched
Although it has --volume-plugin-dir=/var/lib/kubelet/volume-plugins specified.
and
[16:39:54 root@kubernetes-dev-2-3 ovirt~ovirt-flexvolume-driver]# pwd
/var/lib/kubelet/volume-plugins/ovirt~ovirt-flexvolume-driver
[16:39:57 root@kubernetes-dev-2-3 ovirt~ovirt-flexvolume-driver]# ./ovirt-flexvolume-driver init
{"status":"Success","message":"success","Capabilities":{"attach":true}}
(As before, the disk is created in oVirt, but not attached).
What am I missing?
Any ideas?
I tried on a clean install of a kubespray cluster, changed the kubelet to use the default flexvolume dir /usr/libexec/kubernetes/kubelet-plugins/volume/exec, and mounted the volume as a hostPath into kube-controller-manager.
I still get in the kube-controller-manager log:
W0529 15:28:00.242260 1 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-82d564bb-8224-11e9-9165-001a4a160161
W0529 15:29:00.077065 1 plugins.go:842] FindExpandablePluginBySpec(pvc-82d564bb-8224-11e9-9165-001a4a160161) -> returning noopExpandableVolumePluginInstance
I0529 15:29:18.669599 1 reconciler.go:289] attacherDetacher.AttachVolume started for volume "pvc-82d564bb-8224-11e9-9165-001a4a160161" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-82d564bb-8224-11e9-9165-001a4a160161") from node "kubernetes-dev-2-3.int"
E0529 15:29:18.883109 1 driver-call.go:267] Failed to unmarshal output for command: attach, output: "ERROR: logging before flag.Parse: W0529 15:29:18.882015 109 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.\nERROR: logging before flag.Parse: W0529 15:29:18.882067 109 client_config.go:557] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory\ninvalid configuration: no configuration has been provided", error: invalid character 'E' looking for beginning of value
W0529 15:29:18.883178 1 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver, args: [attach {"kubernetes.io/fsType":"ext4","kubernetes.io/pvOrVolumeName":"pvc-82d564bb-8224-11e9-9165-001a4a160161","kubernetes.io/readwrite":"rw"} kubernetes-dev-2-3.int], error: exit status 1, output: "ERROR: logging before flag.Parse: W0529 15:29:18.882015 109 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.\nERROR: logging before flag.Parse: W0529 15:29:18.882067 109 client_config.go:557] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory\ninvalid configuration: no configuration has been provided"
E0529 15:29:18.883294 1 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-82d564bb-8224-11e9-9165-001a4a160161\"" failed. No retries permitted until 2019-05-29 15:31:20.883262123 +0000 UTC m=+771.698020309 (durationBeforeRetry 2m2s). Error: "AttachVolume.Attach failed for volume \"pvc-82d564bb-8224-11e9-9165-001a4a160161\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-82d564bb-8224-11e9-9165-001a4a160161\") from node \"kubernetes-dev-2-3.int\" : invalid character 'E' looking for beginning of value"
I0529 15:29:18.883596 1 event.go:221] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ovirt", Name:"testpodwithflex", UID:"82d94bf6-8224-11e9-9165-001a4a160161", APIVersion:"v1", ResourceVersion:"289501", FieldPath:""}): type: 'Warning' reason: 'FailedAttachVolume' AttachVolume.Attach failed for volume "pvc-82d564bb-8224-11e9-9165-001a4a160161" : invalid character 'E' looking for beginning of value
Seems like I should have read @smithian's comments better :D. It's the KUBECONFIG environment variable that helped. But why? The kube-controller-manager executes the oVirt flexvolume driver, which fails because the driver does not have a kubeconfig, correct?
Is requiring a kubeconfig general flexvolume behaviour, or is it specific to the oVirt flexvolume driver? I would like to clarify this so I know how to adjust the kubespray installation.
I am also attempting to write a doc page so others may find it easier to configure flexvolume correctly in order to use ovirt flexvolume driver (see here: https://github.com/oVirt/ovirt-openshift-extensions/pull/137 ).
The need for a kubeconfig is specific to the ovirt-flexvolume-driver implementation: it is mainly needed to identify the node's systemUUID, which is the ID of the underlying VM, so the disk can be attached to it.
@kissi7 can we close this?
I probably made it work by exposing the driver dir in the kubelet and kube-controller docker images. I am using Rancher RKE to build a dockerized k8s cluster. I experimented a lot along the way, so I need to rebuild a test k8s cluster from scratch and verify again; if it succeeds we can close this and update/create the documentation. Can you guys test it too?
...
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kubelet:
    extra_env:
      - "KUBECONFIG=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml"
    extra_args:
      volume-plugin-dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    extra_binds:
      - /usr/libexec/kubernetes/kubelet-plugins/volume/exec:/usr/libexec/kubernetes/kubelet-plugins/volume/exec
  kube-controller:
    extra_env:
      - "KUBECONFIG=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml"
    extra_args:
      kubeconfig: /etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml
    extra_binds:
      - /usr/libexec/kubernetes/kubelet-plugins/volume/exec:/usr/libexec/kubernetes/kubelet-plugins/volume/exec
...
The best solution would be to add more known locations of the kubeconfig file to https://github.com/oVirt/ovirt-openshift-extensions/blob/master/cmd/ovirt-flexvolume-driver/kube-client.go#L71 . For example, kubeadm clusters place it at /etc/kubernetes/controller-manager.conf.
I'll submit a PR to add this feature.
This project is no longer maintained, closing.
Description
I am running a k8s cluster created by RKE (https://github.com/rancher/rke) and followed the guide at https://github.com/oVirt/ovirt-openshift-extensions/blob/master/docs/Deploying-On-Kubernetes.md . A new disk is created in oVirt but it is not attached to the VM where the pod is running.
Steps To Reproduce
Expected behavior
Disk is attached to the VM where the pod is running.
Versions:
Logs:
The same result for other 2 ovirt-flexvolume-driver pods (with just different VM ID).
Logs after: kubectl create -f test-flex-pod-all-in-one.yaml
engine.log