
This repository is archived. Please file in-tree vSphere Cloud Provider issues at https://github.com/kubernetes/kubernetes/issues. The CSI Driver for vSphere is available at https://github.com/kubernetes/cloud-provider-vsphere

Failed to provision volume with StorageClass "thin-disk": folder XXX not found #499

Closed cormachogan closed 6 years ago

cormachogan commented 6 years ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Testing out persistent volumes with PKS, I hit the following error.

root@pks-cli:~# cat cormac-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: RAID-5
  datastore: vsanDatastore

root@pks-cli:~# cat cormac-pvc-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cormac-slave-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

root@pks-cli:~# kubectl apply -f cormac-sc.yaml
storageclass "thin-disk" created

root@pks-cli:~# kubectl apply -f cormac-pvc-claim.yaml
persistentvolumeclaim "cormac-slave-claim" created

root@pks-cli:~# kubectl describe sc
Name:            thin-disk
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"thin-disk","namespace":""},"parameters":{"datastore":"vsanDatastore","diskformat":"thin","storagePolicyName":"RAID-5"},"provisioner":"kubernetes.io/vsphere-volume"}
Provisioner:     kubernetes.io/vsphere-volume
Parameters:      datastore=vsanDatastore,diskformat=thin,storagePolicyName=RAID-5
ReclaimPolicy:   Delete
Events:

root@pks-cli:~# kubectl describe pvc
Name:          cormac-slave-claim
Namespace:     default
StorageClass:  thin-disk
Status:        Pending
Volume:
Labels:
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"thin-disk"},"name":"cormac-slav...
               volume.beta.kubernetes.io/storage-class=thin-disk
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age           From                         Message
  ----     ------              ----          ----                         -------
  Warning  ProvisioningFailed  (x2 over 6s)  persistentvolume-controller  Failed to provision volume with StorageClass "thin-disk": folder '/CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99' not found
root@pks-cli:~#

After getting this message, I manually created a folder on my vCenter server to match the name above, and then everything worked.
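For reference, the missing folder can be created in the vSphere client, or from the command line with govc. A minimal sketch, assuming govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are set for this vCenter (the inventory path is the one from the error message above):

    # Create the missing VM folder under the datacenter's vm inventory folder
    govc folder.create /CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99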

root@pks-cli:~# kubectl delete pvc cormac-slave-claim
persistentvolumeclaim "cormac-slave-claim" deleted

root@pks-cli:~# kubectl delete sc thin-disk
storageclass "thin-disk" deleted

root@pks-cli:~# kubectl apply -f cormac-sc.yaml
storageclass "thin-disk" created

root@pks-cli:~# kubectl apply -f cormac-pvc-claim.yaml
persistentvolumeclaim "cormac-slave-claim" created

root@pks-cli:~# kubectl describe pvc
Name:          cormac-slave-claim
Namespace:     default
StorageClass:  thin-disk
Status:        Bound
Volume:        pvc-02196bbb-89cc-11e8-939b-005056826ff1
Labels:
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"thin-disk"},"name":"cormac-slav...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-class=thin-disk
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    []
Capacity:      2Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  ----    ------                 ---- ----                         -------
  Normal  ProvisioningSucceeded  5m   persistentvolume-controller  Successfully provisioned volume pvc-02196bbb-89cc-11e8-939b-005056826ff1 using kubernetes.io/vsphere-volume
root@pks-cli:~#

What you expected to happen:

I expected that I would not need to create this folder in advance, and that creating the PVC would create it automatically.

How to reproduce it (as minimally and precisely as possible):

All commands are shown above.

Anything else we need to know?:

This is a PKS (Pivotal Container Service) environment.

Environment:

divyenpatel commented 6 years ago

@cormachogan

This happens because, when provisioning volumes using SPBM (Storage Policy Based Management), we temporarily create a dummy/shadow VM in order to apply the storage policy to the disk.

This dummy VM is created in the resource pool and VM folder specified in the vsphere.conf file. If the VM folder or resource pool is not present, the volume create operation fails.

See code block at https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/vsphere/vsphere_util.go#L261

// setVMOptions resolves the resource pool and the VM folder in which the
// dummy/shadow VM will be created.
func (vs *VSphere) setVMOptions(ctx context.Context, dc *vclib.Datacenter, resourcePoolPath string) (*vclib.VMOptions, error) {
    var vmOptions vclib.VMOptions
    // Look up the resource pool the dummy VM will be placed in.
    resourcePool, err := dc.GetResourcePool(ctx, resourcePoolPath)
    if err != nil {
        return nil, err
    }
    glog.V(9).Infof("Resource pool path %s, resourcePool %+v", resourcePoolPath, resourcePool)
    // Look up the VM folder configured in vsphere.conf (Workspace.Folder).
    // This is the lookup that fails with "folder ... not found" when the
    // folder does not exist in the vCenter inventory.
    folder, err := dc.GetFolderByPath(ctx, vs.cfg.Workspace.Folder)
    if err != nil {
        return nil, err
    }
    vmOptions.VMFolder = folder
    vmOptions.VMResourcePool = resourcePool
    return &vmOptions, nil
}
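For reference, both lookups are driven by the Workspace section of vsphere.conf. A minimal sketch of that section, using the datacenter and folder path from the error above (the server and resourcepool-path values here are placeholders, not taken from this environment):

    [Workspace]
    server = "vcenter.example.com"
    datacenter = "CH-Datacenter"
    folder = "pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99"
    default-datastore = "vsanDatastore"
    resourcepool-path = "Cluster01/Resources"

If the folder named here does not exist in the vCenter inventory, GetFolderByPath fails and the volume create operation aborts with the "folder ... not found" error shown above.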
cormachogan commented 6 years ago

Ah - that explains it. Thanks @divyenpatel

Could we add this detail to the VCP example at https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html ?

The current note there about the ‘vsphere’ cloud provider configuration is not very clear.

cormachogan commented 6 years ago

Follow-up question @divyenpatel: where do I find this vsphere.conf file? I'm using PKS, and the folder I see in the PKS tile is the "Stored VMs Folder", which is set to pcf_vms and which exists. I needed to create a subdirectory in pcf_vms to get the VCP to work.

divyenpatel commented 6 years ago

@cormachogan you can find the location of the vsphere.conf file in the controller-manager's and API server's manifest files on the master node, and in the kubelet config file.

Look for the --cloud-config parameter in those files.
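For example, on the master node something like the following would locate the setting (the manifest path is an assumption: kubeadm-style clusters use /etc/kubernetes/manifests, while PKS/BOSH-deployed clusters may keep job configs elsewhere):

    # Find the --cloud-config flag in the static pod manifests
    grep -r -- '--cloud-config' /etc/kubernetes/manifests/

    # Inspect the running kubelet's flags as well
    ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--cloud-config'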

divyenpatel commented 6 years ago

Issue is resolved.