vmware-archive / kubernetes-archived

This repository is archived. Please file in-tree vSphere Cloud Provider issues at https://github.com/kubernetes/kubernetes/issues. The CSI Driver for vSphere is available at https://github.com/kubernetes/cloud-provider-vsphere.

node_vms_folder value seems to be handled differently between vcp-manager and vcp-daementset #475

Open Aestel opened 6 years ago

Aestel commented 6 years ago

I set node_vms_folder in vcp_secret.yaml to '/Development_VMs/Kubernetes'. The VMs' working folder starts out as /SCT/vm/Development_VMs/. When the vcp-manager pod runs, it creates the VMs folder relative to /SCT/vm/, creating /SCT/vm/Development_VMs/Kubernetes.
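For anyone hitting the same thing, a quick way to confirm where the manager actually put the folder is to query vCenter with govc. This is only a minimal sketch; the GOVC_* connection values are placeholders for your own environment:

# Connection details are placeholders; point these at your own vCenter.
export GOVC_URL='https://vcenter.example.com/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1

# The vcp-manager resolves node_vms_folder relative to the datacenter's VM
# root, so the folder it created shows up under /SCT/vm/:
govc ls /SCT/vm/Development_VMs

# Inspect the folder that was created:
govc folder.info /SCT/vm/Development_VMs/Kubernetes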

When the daemonset runs it fails with:

+ govc object.mv -dc=SCT /SCT/vm/Development_VMs/my-vm-name-snipped /Development_VMs/Kubernetes
+ '[' 1 -eq 0 ']'
+ ERROR_MSG='Failed to move Node Virtual Machine to the Working Directory Folder'
+ update_VcpConfigStatus vcp-daementset-d68xg '[PHASE 3] Move VM to the Working Directory' FAILED 'Failed to move Node Virtual Machine to the Working Directory Folder'

I connected to the pod to try the govc command given above and, as expected, got: govc: folder '/Development_VMs/Kubernetes' not found
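The mismatch is easy to demonstrate from inside the pod by looking the folder up both ways; a small sketch using govc folder.info purely for inspection, with the same -dc=SCT flag as the failing command:

# Succeeds: the folder the vcp-manager created lives under the datacenter's VM root.
govc folder.info -dc=SCT /SCT/vm/Development_VMs/Kubernetes

# Fails with the same 'not found' error: this is the bare secret value that the
# daemonset passes to govc object.mv.
govc folder.info -dc=SCT /Development_VMs/Kubernetes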

I have also tried removing the leading forward slash, which fails in a similar way, and giving the full path, which causes the vcp-manager to try to create the folder /SCT/vm/SCT/vm/Development_VMs/Kubernetes.

The Docker image pulled was:

REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
cnastorage/enablevcp          v1                  bf7b5e183363        5 weeks ago         534MB
Aestel commented 6 years ago

I worked around this issue by setting the value to '/Development_VMs/Kubernetes' when initially creating the vcp-manager pod, letting the daemonset pods fail, changing the value to '/SCT/vm/Development_VMs/Kubernetes' and reapplying the secret, and then deleting the vcp-daementset pods.
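In kubectl terms the workaround looks roughly like this. This is only a sketch: the namespace and the pod label selector are assumptions about your deployment, so substitute whatever your setup uses.

# Step 1: deploy with node_vms_folder set to '/Development_VMs/Kubernetes' so the
# vcp-manager pod creates /SCT/vm/Development_VMs/Kubernetes, and let the
# daemonset pods fail their '[PHASE 3] Move VM to the Working Directory' step.

# Step 2: change node_vms_folder in vcp_secret.yaml to
# '/SCT/vm/Development_VMs/Kubernetes' and reapply the secret:
kubectl apply -f vcp_secret.yaml

# Step 3: delete the failed daemonset pods so they restart and pick up the new
# value (namespace and label selector are assumed here):
kubectl delete pod -n kube-system -l app=vcp-daementset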

espigle commented 6 years ago

I want to note that I have observed the same behavior, and the same workaround works for me as well. It's cumbersome to work through, but it is possible to get past this phase with it.