hpe-storage / csi-driver

A Container Storage Interface (CSI) driver from HPE
https://scod.hpedev.io
Apache License 2.0

Creating a clone from a snapshot or from a PVC fails with "Body length 0" error #289

Open alexbarta opened 3 years ago

alexbarta commented 3 years ago

Hi all,

I am using the HPE CSI driver 2.0.0 with Primera FC and Kubernetes 1.20.2. I enabled CSI snapshot support:

git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter

# Kubernetes 1.20 and newer
git checkout tags/v4.0.0 -b release-4.0
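
For reference, a minimal sketch of the remaining install steps from that repository (paths assumed from the release-4.0 layout; verify against the checked-out tag):

# Install the VolumeSnapshot CRDs
kubectl kustomize client/config/crd | kubectl create -f -

# Deploy the common snapshot controller (target namespace may differ per release)
kubectl kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -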

I then created a VolumeSnapshotClass:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "Snapshot created by the HPE CSI Driver"
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
  csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage
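
The VolumeSnapshot manifest itself is not shown here; a minimal sketch matching the names in the output below (assumed, since the original manifest was not included in the issue) would be:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot2
spec:
  volumeSnapshotClassName: hpe-snapshot        # the class defined above
  source:
    persistentVolumeClaimName: my-first-pvc1   # snapshot the existing PVC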

I am able to create a volume snapshot correctly:

# k get pvc my-first-pvc1
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-first-pvc1   Bound    pvc-760bdebc-a625-4dd6-86f4-de7bce1af1c4   2Gi        RWO            hpe-primera    42h

# k get volumesnapshot
NAME           READYTOUSE   SOURCEPVC       SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
my-snapshot2   true         my-first-pvc1                           2Gi           hpe-snapshot    snapcontent-70a455b2-eba8-4400-8a95-de201ddecc3c   11s            12s

but when I create a PVC from the snapshot or a clone from another PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-from-snapshot
spec:
  dataSource:
    name: my-snapshot2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-primera
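
The clone-from-PVC variant that fails in the same way would use a PersistentVolumeClaim dataSource instead of a VolumeSnapshot; a minimal sketch (the claim name my-pvc-from-pvc is hypothetical, everything else reuses names from above):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-from-pvc          # hypothetical name for the clone
spec:
  dataSource:
    name: my-first-pvc1          # clone directly from the existing PVC
    kind: PersistentVolumeClaim  # core API group, so no apiGroup field
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-primera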

In both cases I get this error (shown here for the snapshot-based claim):

# k describe pvc my-pvc-from-snapshot
Name:          my-pvc-from-snapshot
Namespace:     default
StorageClass:  hpe-primera
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
DataSource:
  APIGroup:  snapshot.storage.k8s.io
  Kind:      VolumeSnapshot
  Name:      my-snapshot2
Used By:     <none>
Events:
  Type     Reason                Age                From                                                            Message
  ----     ------                ----               ----                                                            -------
  Normal   Provisioning          24s (x6 over 86s)  csi.hpe.com_hpecp14.local_ae153ec4-f172-48bf-904d-e606c15ca7aa  External provisioner is provisioning volume for claim "default/my-pvc-from-snapshot"
  Warning  ProvisioningFailed    18s (x6 over 80s)  csi.hpe.com_hpecp14.local_ae153ec4-f172-48bf-904d-e606c15ca7aa  failed to provision volume with StorageClass "hpe-primera": rpc error: code = Internal desc = Failed to clone-create volume pvc-b894308c-aa0d-4656-b659-ea80809a8833, Post http://primera3par-csp-svc:8080/containers/v1/volumes: http: ContentLength=448 with Body length 0
  Normal   ExternalProvisioning  3s (x7 over 86s)   persistentvolume-controller                                     waiting for a volume to be created, either by external provisioner "csi.hpe.com" or manually created by system administrator

It sounds like the driver is waiting for somebody else to create an empty destination volume?

datamattsson commented 3 years ago

I think you need to check the logs of the CSP to see what errors out. @sneharai4 @bhagyashree-sarawate @pavansshanbhag are you able to assist here?
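
For example, a sketch of how to pull those logs (the deployment name primera3par-csp is an assumption based on the primera3par-csp-svc service in the error message):

kubectl logs -n hpe-storage deploy/primera3par-csp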

alexbarta commented 3 years ago

I have checked the CSI controller logs, but I didn't find anything interesting:

k logs  hpe-csi-controller-74fc6dbd8b-t4gzp -n hpe-storage -c hpe-csi-driver
...
time="2021-09-24T07:31:31Z" level=info msg="Requested for capacity bytes: 2147483648" file="controller_server.go:220"
time="2021-09-24T07:31:31Z" level=info msg="Looking up PVC with uid b894308c-aa0d-4656-b659-ea80809a8833" file="flavor.go:500"
time="2021-09-24T07:31:31Z" level=info msg="Found the following claims: [&PersistentVolumeClaim{ObjectMeta:{my-pvc-from-snapshot  default  b894308c-aa0d-4656-b659-ea80809a8833 1259753 0 2021-09-22 09:01:11 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"my-pvc-from-snapshot\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"dataSource\":{\"apiGroup\":\"snapshot.storage.k8s.io\",\"kind\":\"VolumeSnapshot\",\"name\":\"my-snapshot2\"},\"resources\":{\"requests\":{\"storage\":\"2Gi\"}},\"storageClassName\":\"hpe-primera\"}}\n volume.beta.kubernetes.io/storage-provisioner:csi.hpe.com] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-09-22 09:01:11 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2021-09-22 09:01:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 100 97 116 97 83 111 117 114 99 101 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 71 114 111 117 112 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 115 116 111 114 97 103 101 67 108 97 115 115 78 97 109 101 34 58 123 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hpe-primera,VolumeMode:*Filesystem,DataSource:&TypedLocalObjectReference{APIGroup:*snapshot.storage.k8s.io,Kind:VolumeSnapshot,Name:my-snapshot2,},},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}]" file="flavor.go:509"
time="2021-09-24T07:31:31Z" level=info msg="Configuring annotations on PVC &PersistentVolumeClaim{ObjectMeta:{my-pvc-from-snapshot  default  b894308c-aa0d-4656-b659-ea80809a8833 1259753 0 2021-09-22 09:01:11 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"my-pvc-from-snapshot\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"dataSource\":{\"apiGroup\":\"snapshot.storage.k8s.io\",\"kind\":\"VolumeSnapshot\",\"name\":\"my-snapshot2\"},\"resources\":{\"requests\":{\"storage\":\"2Gi\"}},\"storageClassName\":\"hpe-primera\"}}\n volume.beta.kubernetes.io/storage-provisioner:csi.hpe.com] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-09-22 09:01:11 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2021-09-22 09:01:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 100 97 116 97 83 111 117 114 99 101 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 71 114 111 117 112 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 115 116 111 114 97 103 101 67 108 97 115 115 78 97 109 101 34 58 123 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hpe-primera,VolumeMode:*Filesystem,DataSource:&TypedLocalObjectReference{APIGroup:*snapshot.storage.k8s.io,Kind:VolumeSnapshot,Name:my-snapshot2,},},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}" file="flavor.go:167"
time="2021-09-24T07:31:31Z" level=info msg="allowOverrides nfsResources,nfsNamespace" file="flavor.go:520"
time="2021-09-24T07:31:31Z" level=info msg="processing key: nfsResources" file="flavor.go:525"
time="2021-09-24T07:31:31Z" level=info msg="processing key: nfsNamespace" file="flavor.go:525"
time="2021-09-24T07:31:31Z" level=info msg="resulting override keys :[]string{\"nfsResources\", \"nfsNamespace\", \"nfsPVC\"}" file="flavor.go:534"
time="2021-09-24T07:31:31Z" level=error msg="status code was 404 Not Found for request: action=GET path=http://primera3par-csp-svc:8080/containers/v1/volumes?name=pvc-b894308c-aa0d-4656-b659-ea80809a8833, attempting to decode error response." file="client.go:184"
time="2021-09-24T07:31:31Z" level=info msg="About to create a new clone 'pvc-b894308c-aa0d-4656-b659-ea80809a8833' from snapshot snapshot-70a455b2-eba8-4400-8a95-de201ddecc3c with options map[access_protocol:fc allow_overrides:nfsResources,nfsNamespace host_encryption:false multi_initiator:false nfs_provisioner_image:192.168.150.2:5000/hpestorage/nfs-provisioner:v1.0.0]" file="controller_server.go:514"
time="2021-09-24T07:31:37Z" level=error msg="Volume creation failed, err: rpc error: code = Internal desc = Failed to clone-create volume pvc-b894308c-aa0d-4656-b659-ea80809a8833, Post http://primera3par-csp-svc:8080/containers/v1/volumes: http: ContentLength=448 with Body length 0" file="controller_server.go:240"
time="2021-09-24T07:31:37Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to clone-create volume pvc-b894308c-aa0d-4656-b659-ea80809a8833, Post http://primera3par-csp-svc:8080/containers/v1/volumes: http: ContentLength=448 with Body length 0" file="utils.go:73"

Are there any other logs that might be worth checking?

alexbarta commented 3 years ago

We found out why: the "cpg" parameter was missing from our StorageClass configuration, so the driver didn't know where to place the new volume. I think you can close this one.
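
For anyone hitting the same error, a minimal sketch of what the corrected StorageClass might look like. Only the names already shown in this issue (hpe-primera, the hpe-backend secret, the fc access protocol) come from the setup above; the cpg value and the remaining parameters are assumptions for illustration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-primera
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs     # assumed filesystem
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  accessProtocol: fc
  cpg: SSD_r6                        # hypothetical CPG name; this was the missing parameter
reclaimPolicy: Delete
allowVolumeExpansion: true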