cormachogan closed this issue 6 years ago
@cormachogan
This happens because, when provisioning volumes using SPBM, we temporarily create a dummy/shadow VM in order to apply the storage policy to the disk.
This dummy VM is created in the resource pool and VM folder specified in the vsphere.conf file. If the VM folder or resource pool is not present, the volume create operation fails.
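For reference, a minimal vsphere.conf layout might look like the following sketch. The server, credentials, and path values are placeholders, not taken from this environment; the point is that the `[Workspace]` `folder` and `resourcepool-path` must already exist in vCenter before provisioning:

```
[Global]
user = "administrator@vsphere.local"
password = "changeme"
server = "vcenter.example.com"
port = "443"
insecure-flag = "1"
datacenters = "CH-Datacenter"

[Workspace]
server = "vcenter.example.com"
datacenter = "CH-Datacenter"
default-datastore = "vsanDatastore"
resourcepool-path = "Cluster/Resources"
folder = "kubernetes"
```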
See code block at https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/vsphere/vsphere_util.go#L261
```go
func (vs *VSphere) setVMOptions(ctx context.Context, dc *vclib.Datacenter, resourcePoolPath string) (*vclib.VMOptions, error) {
	var vmOptions vclib.VMOptions
	resourcePool, err := dc.GetResourcePool(ctx, resourcePoolPath)
	if err != nil {
		return nil, err
	}
	glog.V(9).Infof("Resource pool path %s, resourcePool %+v", resourcePoolPath, resourcePool)
	folder, err := dc.GetFolderByPath(ctx, vs.cfg.Workspace.Folder)
	if err != nil {
		return nil, err
	}
	vmOptions.VMFolder = folder
	vmOptions.VMResourcePool = resourcePool
	return &vmOptions, nil
}
```
Ah - that explains it. Thanks @divyenpatel
Could we add this detail to the VCP example - https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html
The current note about the ‘vsphere’ cloudprovider configuration is not very clear.
Follow-up question @divyenpatel - where do I find this vsphere.conf file? I'm using PKS, and the other folder I see in the PKS tile is the "Stored VMs Folder", which is set to pcf_vms and exists. I needed to create a subdirectory in pcf_vms to get the VCP to work.
@cormachogan you can find the location of the vsphere.conf file from the controller-manager or API server's manifest files on the master node, and in the kubelet config file. Look for the --cloud-config parameter in those files.
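As a sketch of what that lookup looks like: on a kubeadm-style master the static pod manifests typically live under /etc/kubernetes/manifests (PKS-managed nodes may place them elsewhere). The snippet below recreates a minimal, hypothetical manifest just to show what to grep for; on a real node you would run the same grep against the actual manifest directory.

```shell
# Recreate a minimal (hypothetical) controller-manager manifest for illustration.
mkdir -p /tmp/manifests
cat > /tmp/manifests/kube-controller-manager.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=vsphere
    - --cloud-config=/etc/kubernetes/vsphere.conf
EOF

# Extract the --cloud-config value; this reveals the vsphere.conf path.
grep -h -o -- '--cloud-config=[^ ]*' /tmp/manifests/*.yaml
```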
The issue is resolved.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Testing out persistent volumes with PKS, I hit the following error.
```
root@pks-cli:~# cat cormac-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: RAID-5
  datastore: vsanDatastore
```
```
root@pks-cli:~# cat cormac-pvc-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cormac-slave-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
```
```
root@pks-cli:~# kubectl apply -f cormac-sc.yaml
storageclass "thin-disk" created
root@pks-cli:~# kubectl apply -f cormac-pvc-claim.yaml
persistentvolumeclaim "cormac-slave-claim" created
```
```
root@pks-cli:~# kubectl describe sc
Name:            thin-disk
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"thin-disk","namespace":""},"parameters":{"datastore":"vsanDatastore","diskformat":"thin","storagePolicyName":"RAID-5"},"provisioner":"kubernetes.io/vsphere-volume"}
Provisioner:     kubernetes.io/vsphere-volume
Parameters:      datastore=vsanDatastore,diskformat=thin,storagePolicyName=RAID-5
ReclaimPolicy:   Delete
Events:
```
```
root@pks-cli:~# kubectl describe pvc
Name:          cormac-slave-claim
Namespace:     default
StorageClass:  thin-disk
Status:        Pending
Volume:
Labels:
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"thin-disk"},"name":"cormac-slav...
               volume.beta.kubernetes.io/storage-class=thin-disk
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age           From                         Message
  Warning  ProvisioningFailed  (x2 over 6s)  persistentvolume-controller  Failed to provision volume with StorageClass "thin-disk": folder '/CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99' not found
root@pks-cli:~#
```
After getting this message I manually created a folder on my vCenter server to match the name above, and then everything worked.
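For anyone hitting the same error, the missing folder can also be created from the CLI with govc rather than through the vSphere client. This is only a sketch: the GOVC_* values are placeholders for your own vCenter, and the folder path is the one reported in the ProvisioningFailed event above.

```shell
# Placeholder credentials -- point these at your own vCenter.
export GOVC_URL='https://vcenter.example.com/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1

# Create the VM folder the vSphere Cloud Provider expects to find.
govc folder.create '/CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99'
```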
```
root@pks-cli:~# kubectl delete pvc cormac-slave-claim
persistentvolumeclaim "cormac-slave-claim" deleted
root@pks-cli:~# kubectl delete sc thin-disk
storageclass "thin-disk" deleted
root@pks-cli:~# kubectl apply -f cormac-sc.yaml
storageclass "thin-disk" created
root@pks-cli:~# kubectl apply -f cormac-pvc-claim.yaml
persistentvolumeclaim "cormac-slave-claim" created
```
```
root@pks-cli:~# kubectl describe pvc
Name:          cormac-slave-claim
Namespace:     default
StorageClass:  thin-disk
Status:        Bound
Volume:        pvc-02196bbb-89cc-11e8-939b-005056826ff1
Labels:
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"thin-disk"},"name":"cormac-slav...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-class=thin-disk
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    []
Capacity:      2Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  Normal  ProvisioningSucceeded  5m   persistentvolume-controller  Successfully provisioned volume pvc-02196bbb-89cc-11e8-939b-005056826ff1 using kubernetes.io/vsphere-volume
root@pks-cli:~#
```
What you expected to happen:
I expected not to need to create this folder in advance - that the act of creating the PVC would do this automatically.
How to reproduce it (as minimally and precisely as possible):
All commands are shown above.
Anything else we need to know?:
This is a PKS environment - Pivotal Container Service.
Environment:
Kubernetes version (use kubectl version):
```
root@pks-cli:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud provider or hardware configuration: vSphere
OS (e.g. from /etc/os-release):
```
NAME="Ubuntu"
VERSION="17.10 (Artful Aardvark)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.10"
VERSION_ID="17.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=artful
UBUNTU_CODENAME=artful
```
Kernel (e.g. uname -a):
```
Linux pks-cli 4.13.0-41-generic #46-Ubuntu SMP Wed May 2 13:38:30 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```
Install tools:
Others: