kulkarnicr opened this issue:
Tried with:
- quay.io/k8scsi/snapshot-controller:canary
- quay.io/ibm-spectrum-scale-dev/ibm-spectrum-scale-csi-driver:dev
- quay.io/ibm-spectrum-scale-dev/ibm-spectrum-scale-csi-operator:dev
- quay.io/k8scsi/csi-snapshotter:canary

The issue recreates on the latest builds too.
Environment:
- k8s: v1.20.1
- IBM Spectrum Scale: 5.1.1.0 210107.122040
- apiVersion: snapshot.storage.k8s.io/v1
- quay.io/ibm-spectrum-scale-dev/ibm-spectrum-scale-csi-operator:snapshots
- quay.io/ibm-spectrum-scale-dev/ibm-spectrum-scale-csi-driver:snapshots
- us.gcr.io/k8s-artifacts-prod/sig-storage/snapshot-controller:v4.0.0
[root@ck-x-master 2021_01_11-02:15:34 test_snapshot]$ df -h
Filesystem Size Used Avail Use% Mounted on
...
fs2 4.0G 3.7G 324M 93% /ibm/fs2
[root@ck-x-master 2021_01_11-02:21:11 test_snapshot]$ kubectl -n ibm-spectrum-scale-csi-driver get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-10mb-1 Bound pvc-32135bed-2951-4740-9104-864aaeb141a6 11Mi RWX ibm-spectrum-scale-csi-1k-inodes 76m
pvc-300mb-fs2 Bound pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66 308Mi RWX sc-indep-fset-fs2 14s
[root@ck-x-master 2021_01_11-02:21:13 test_snapshot]$
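For context, pvc-300mb-fs2 was presumably created from a manifest along these lines (a sketch reconstructed from the table above; the exact request size is an assumption based on the PVC name, and the bound capacity shows 308Mi after rounding):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-300mb-fs2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Mi
  storageClassName: sc-indep-fset-fs2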
[root@ck-x-master 2021_01_11-02:21:29 test_snapshot]$ cd /ibm/fs2/pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66/pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data/
[root@ck-x-master 2021_01_11-02:21:37 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ ls -ltrha
total 1.0K
drwxrwx--x 3 root root 4.0K Jan 11 02:21 ..
drwxrwx--x 2 root root 4.0K Jan 11 02:21 .
[root@ck-x-master 2021_01_11-02:21:38 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ echo ganesha > file1
[root@ck-x-master 2021_01_11-02:21:43 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ mkdir dir1
[root@ck-x-master 2021_01_11-02:21:45 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ yes > bigfile1
yes: standard output: No space left on device
yes: write error
[root@ck-x-master 2021_01_11-02:21:51 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ ^C
[root@ck-x-master 2021_01_11-02:21:53 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$ ls -ltrha
total 297M
drwxrwx--x 3 root root 4.0K Jan 11 02:21 ..
-rw-r--r-- 1 root root 8 Jan 11 02:21 file1
drwxr-xr-x 2 root root 4.0K Jan 11 02:21 dir1
drwxrwx--x 3 root root 4.0K Jan 11 02:21 .
-rw-r--r-- 1 root root 300M Jan 11 02:21 bigfile1
[root@ck-x-master 2021_01_11-02:21:54 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66-data]$
[root@ck-x-master 2021_01_11-02:24:26 test_snapshot]$ cat vs-1-fs2.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: vs-1-fs2
spec:
  volumeSnapshotClassName: vsclass1
  source:
    persistentVolumeClaimName: pvc-300mb-fs2
[root@ck-x-master 2021_01_11-02:24:29 test_snapshot]$
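The vsclass1 class referenced above is not shown; it was presumably defined along these lines (a minimal sketch; the driver name follows the IBM Spectrum Scale CSI provisioner, and the deletionPolicy is an assumption):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vsclass1
driver: spectrumscale.csi.ibm.com
deletionPolicy: Delete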
[root@ck-x-master 2021_01_11-02:24:31 test_snapshot]$ kubectl -n ibm-spectrum-scale-csi-driver apply -f vs-1-fs2.yaml
volumesnapshot.snapshot.storage.k8s.io/vs-1-fs2 created
[root@ck-x-master 2021_01_11-02:24:38 test_snapshot]$ knvs -w
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
vs-1-fs2 false pvc-300mb-fs2 vsclass1 snapcontent-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c 2s
vs1-vsclass1 true pvc-10mb-1 11Mi vsclass1 snapcontent-2bff5da3-a3f5-4cb9-b8e1-ffc9c6ab7fc8 53m 53m
^C[root@ck-x-master 2021_01_11-02:27:03 test_snapshot]$
Jan 11 02:28:01 ck-x-master mmfs[13840]: REST-CLI root admin [EXIT, CHANGE] 'mmcrsnapshot fs2 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66:snapshot-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c' RC=28
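(This system-log entry already captures the failure: RC=28 is errno ENOSPC, "No space left on device". The problem is that it never surfaces as a Kubernetes event.)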
[root@ck-x-master 2021_01_11-02:27:03 test_snapshot]$ kubectl -n ibm-spectrum-scale-csi-driver describe volumesnapshot vs-1-fs2
Name:         vs-1-fs2
Namespace:    ibm-spectrum-scale-csi-driver
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2021-01-11T10:24:38Z
  Finalizers:
    snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  Generation:  1
  Managed Fields:
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:source:
          .:
          f:persistentVolumeClaimName:
        f:volumeSnapshotClassName:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-01-11T10:24:38Z
    API Version:  snapshot.storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
      f:status:
        .:
        f:boundVolumeSnapshotContentName:
        f:readyToUse:
    Manager:      snapshot-controller
    Operation:    Update
    Time:         2021-01-11T10:24:38Z
  Resource Version:  392883
  UID:               22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c
Spec:
  Source:
    Persistent Volume Claim Name:  pvc-300mb-fs2
  Volume Snapshot Class Name:      vsclass1
Status:
  Bound Volume Snapshot Content Name:  snapcontent-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c
  Ready To Use:                        false
Events:
  Type    Reason            Age    From                 Message
  ----    ------            ----   ----                 -------
  Normal  CreatingSnapshot  2m34s  snapshot-controller  Waiting for a snapshot ibm-spectrum-scale-csi-driver/vs-1-fs2 to be created by the CSI driver.
[root@ck-x-master 2021_01_11-02:27:12 test_snapshot]$
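Note that the only event recorded is the initial CreatingSnapshot; the underlying mmcrsnapshot failure (RC=28) never shows up here, which is exactly the problem this issue reports.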
[root@ck-x-master 2021_01_11-02:27:41 test_snapshot]$ kubectl -n ibm-spectrum-scale-csi-driver delete volumesnapshot vs-1-fs2
volumesnapshot.snapshot.storage.k8s.io "vs-1-fs2" deleted
[root@ck-x-master 2021_01_11-02:28:02 test_snapshot]$
[root@ck-x-master 2021_01_11-02:28:07 test_snapshot]$ kubectl -n ibm-spectrum-scale-csi-driver get volumesnapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
vs1-vsclass1 true pvc-10mb-1 11Mi vsclass1 snapcontent-2bff5da3-a3f5-4cb9-b8e1-ffc9c6ab7fc8 56m 56m
[root@ck-x-master 2021_01_11-02:28:08 test_snapshot]$
[root@ck-x-master 2021_01_11-02:28:09 test_snapshot]$ mmcrsnapshot fs2 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66:snapshot-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c
Flushing dirty data for snapshot pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66:snapshot-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c...
Quiescing all file system operations.
No space left on device
Snapshot error: 28, snapName pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66:snapshot-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c, id 95.
mmcrsnapshot: Command failed. Examine previous error messages to determine cause.
[root@ck-x-master 2021_01_11-02:28:21 test_snapshot]$
Jan 11 02:28:21 ck-x-master mmfs[14138]: CLI root root [EXIT, CHANGE] 'mmcrsnapshot fs2 pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66:snapshot-22f7ade4-88b5-4d3a-a7c0-664eddc6ab7c' RC=28
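To confirm the Scale-side state before retrying, the filesystem's free space and any snapshots that did get created can be checked from the CLI (assuming fs2 is the filesystem device name):

# per-pool free space on the Scale filesystem
mmdf fs2
# snapshots recorded for the PVC's independent fileset
mmlssnapshot fs2 -j pvc-5ec6c16a-b3b0-424d-8393-bfa61ea61c66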
Describe the bug
If we run out of space on the Scale side, creating a snapshot fails. However, there is no event to indicate this problem to the end user. Adding an event would help users understand why the snapshot remains in the readyToUse=false state.
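A minimal sketch of the kind of change this asks for, in the driver's CreateSnapshot path (mapSnapshotError and its signature are hypothetical, not the driver's actual code): mapping the mmcrsnapshot ENOSPC failure to a descriptive gRPC status gives the csi-snapshotter sidecar something meaningful to record on the snapshot objects.

package driver

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// mapSnapshotError converts a failed Scale snapshot create into a gRPC
// status. rc mirrors the mmcrsnapshot return code (28 == errno ENOSPC).
// Hypothetical helper: the real driver code may be structured differently.
func mapSnapshotError(rc int, snapName string, err error) error {
	if err == nil {
		return nil
	}
	if rc == 28 {
		// A non-OK status from CreateSnapshot is recorded as an error on
		// the VolumeSnapshotContent by the csi-snapshotter sidecar, so the
		// cause can be surfaced on the VolumeSnapshot instead of it just
		// sitting at readyToUse=false.
		return status.Errorf(codes.ResourceExhausted,
			"create snapshot %s failed: no space left on device on the Scale filesystem", snapName)
	}
	return status.Errorf(codes.Internal, "create snapshot %s failed: %v", snapName, err)
}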
To Reproduce
Steps to reproduce the behavior (from the transcript above):
1. Fill the Scale filesystem (fs2) until it is nearly full (93% used above).
2. Create a PVC (pvc-300mb-fs2) backed by that filesystem and write into it until the filesystem runs out of space.
3. Create a VolumeSnapshot (vs-1-fs2) of the PVC.
4. The snapshot stays at readyToUse=false; the underlying mmcrsnapshot fails with RC=28 (ENOSPC), but no event surfaces the failure.
Expected behavior
An event should be displayed that explains the snapshot creation failed because there is no space left on the device.
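For illustration, a hypothetical event along these lines (the reason string and wording are assumptions, not the controller's actual output) would make the failure visible in kubectl describe volumesnapshot:

Events:
  Type     Reason                  Age  From                 Message
  ----     ------                  ---- ----                 -------
  Warning  SnapshotCreationFailed  10s  snapshot-controller  Failed to create snapshot: no space left on device on the Scale filesystem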
Environment
The k8s, IBM Spectrum Scale, and container image versions are listed at the top of this report.