roysahar-ibm opened 3 years ago
deployment.yaml
apiVersion: v1
kind: List
metadata:
  name: ibm-csi-block
  namespace: kube-system
  annotations:
    version: "template-v01"
items:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: ibm-csi
  - apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: operatorgroup
      namespace: ibm-csi
    spec:
      targetNamespaces:
        - ibm-csi
  - apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ibm-block-csi-operator
      namespace: ibm-csi
      labels:
        app.kubernetes.io/name: ibm-block-csi-operator
        app.kubernetes.io/instance: ibm-block-csi-operator
        app.kubernetes.io/managed-by: ibm-block-csi-operator
        release: ibm-block-csi-operator
    spec:
      channel: stable
      name: ibm-block-csi-operator
      source: certified-operators
      sourceNamespace: openshift-marketplace
      startingCSV: ibm-block-csi-operator.v1.4.0
      installPlanApproval: Automatic
  - apiVersion: csi.ibm.com/v1
    kind: IBMBlockCSI
    metadata:
      labels:
        app.kubernetes.io/instance: ibm-block-csi-operator
        app.kubernetes.io/managed-by: ibm-block-csi-operator
        app.kubernetes.io/name: ibm-block-csi-operator
      name: ibm-block-csi
      namespace: ibm-csi
    spec:
      controller:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: kubernetes.io/arch
                      operator: In
                      values:
                        - amd64
                        - s390x
                        - ppc64le
        imagePullPolicy: IfNotPresent
        repository: ibmcom/ibm-block-csi-driver-controller
        tag: 1.4.0
      node:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: kubernetes.io/arch
                      operator: In
                      values:
                        - amd64
                        - s390x
                        - ppc64le
        imagePullPolicy: IfNotPresent
        repository: ibmcom/ibm-block-csi-driver-node
        tag: 1.4.0
      sidecars:
        - imagePullPolicy: IfNotPresent
          name: csi-node-driver-registrar
          repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
          tag: v2.0.1
        - imagePullPolicy: IfNotPresent
          name: csi-provisioner
          repository: k8s.gcr.io/sig-storage/csi-provisioner
          tag: v2.0.2
        - imagePullPolicy: IfNotPresent
          name: csi-attacher
          repository: k8s.gcr.io/sig-storage/csi-attacher
          tag: v3.0.0
        - imagePullPolicy: IfNotPresent
          name: csi-snapshotter
          repository: k8s.gcr.io/sig-storage/csi-snapshotter
          tag: v3.0.0
        - imagePullPolicy: IfNotPresent
          name: csi-resizer
          repository: k8s.gcr.io/sig-storage/csi-resizer
          tag: v1.0.0
        - imagePullPolicy: IfNotPresent
          name: livenessprobe
          repository: k8s.gcr.io/sig-storage/livenessprobe
          tag: v2.1.0
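A minimal sketch of applying this manifest and waiting for the operator to come up (assumes kubectl is configured against the target cluster; the namespace and resource names are taken from the manifest above, and the deployment name and timeout are illustrative assumptions):

```shell
# Create the Namespace, OperatorGroup, Subscription and IBMBlockCSI CR in order.
kubectl apply -f deployment.yaml

# Wait until OLM has rolled out the operator deployment (name assumed to match
# the Subscription; adjust if the installed deployment is named differently).
kubectl -n ibm-csi rollout status deployment/ibm-block-csi-operator --timeout=300s

# Check which CSV the Subscription resolved to.
kubectl -n ibm-csi get subscription ibm-block-csi-operator \
  -o jsonpath='{.status.installedCSV}'
```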
pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test
  namespace: mynamespace
spec:
  storageClassName: sc-test
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
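A sketch of creating the claim and checking that it binds (the `--for=jsonpath` form of `kubectl wait` needs kubectl v1.23 or later; since sc-test uses `volumeBindingMode: Immediate`, the claim should bind without a consumer pod):

```shell
kubectl apply -f pvc-test.yaml

# Block until the claim reports phase Bound (requires kubectl >= 1.23).
kubectl -n mynamespace wait pvc/pvc-test \
  --for=jsonpath='{.status.phase}'=Bound --timeout=120s
```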
mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
spec:
  containers:
    - name: myapp
      image: alpine
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: pvc-test
          mountPath: "/data"
  volumes:
    - name: pvc-test
      persistentVolumeClaim:
        claimName: pvc-test
  nodeSelector:
    kubernetes.io/hostname: 9.151.161.108
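The pod can then be created and the mount verified, for example (a sketch; assumes the PVC above is already Bound and the same cluster context):

```shell
kubectl apply -f mypod.yaml
kubectl -n mynamespace wait pod/mypod --for=condition=Ready --timeout=300s

# Confirm the block volume is mounted at the mountPath declared in mypod.yaml.
kubectl -n mynamespace exec mypod -c myapp -- /bin/sh -c "mount | grep /data"
```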
IBM CSI block deployment pods
kubectl get pods -n mynamespace
NAME READY STATUS RESTARTS AGE
ibm-block-csi-controller-0 6/6 Running 0 10m
ibm-block-csi-node-ccxwk 3/3 Running 0 10m
ibm-block-csi-node-cdn7r 3/3 Running 0 10m
ibm-block-csi-node-v29dd 3/3 Running 0 10m
ibm-block-csi-operator-5497498db8-qw4nc 1/1 Running 0 10m
Storage Class
kubectl get sc sc-test
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
sc-test block.csi.ibm.com Delete Immediate false 25h
PVC Created using the above Storage Class
kubectl get pvc -n mynamespace
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-0e4af4ca-2e1f-444a-9cf6-945f098c904b 1Gi RWO sc-test 24s
PV bound by the above PVC
kubectl get pv -n mynamespace
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0e4af4ca-2e1f-444a-9cf6-945f098c904b 1Gi RWO Delete Bound mynamespace/pvc-test sc-test 14m
Pod using the above PVC
kubectl get pod mypod -n mynamespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod 1/1 Running 0 25h 172.30.88.176 9.151.161.108 <none> <none>
Description of the above pod
kubectl describe pod mypod -n mynamespace
Name:         mypod
Namespace:    mynamespace
Priority:     0
Node:         9.151.161.108/9.151.161.108
Start Time:   Mon, 25 Jan 2021 17:28:05 +0200
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 172.30.88.176/32
              cni.projectcalico.org/podIPs: 172.30.88.176/32
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "k8s-pod-network",
                    "ips": [
                        "172.30.88.176"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "k8s-pod-network",
                    "ips": [
                        "172.30.88.176"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: anyuid
Status:       Running
IP:           172.30.88.176
IPs:
  IP:  172.30.88.176
Containers:
  myapp:
    Container ID:  cri-o://eb17443838c58cab588c178001e662b89c93a0cdc9c4a46a8fcc77603f6a3bc7
    Image:         alpine
    Image ID:      docker.io/library/alpine@sha256:d0710affa17fad5f466a70159cc458227bd25d4afb39514ef662ead3e6c99515
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      while true; do sleep 30; done;
    State:          Running
      Started:      Mon, 25 Jan 2021 17:28:32 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from pvc-test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t2ff4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvc-test:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-test
    ReadOnly:   false
  default-token-t2ff4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t2ff4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=9.151.161.108
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age  From                     Message
  ----    ------                  ---- ----                     -------
  Normal  Scheduled               12m  default-scheduler        Successfully assigned mynamespace/mypod to 9.151.161.108
  Normal  SuccessfulAttachVolume  12m  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0e4af4ca-2e1f-444a-9cf6-945f098c904b"
  Normal  AddedInterface          11m  multus                   Add eth0 [172.30.88.176/32]
  Normal  Pulling                 11m  kubelet                  Pulling image "alpine"
  Normal  Pulled                  11m  kubelet                  Successfully pulled image "alpine"
  Normal  Created                 11m  kubelet                  Created container myapp
  Normal  Started                 11m  kubelet                  Started container myapp
List the /tmp folder in the pod's container before writing
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "ls -la /tmp"
total 0
drwxrwxrwt 2 root root 6 Jan 14 11:49 .
drwxr-xr-x 1 root root 40 Jan 25 15:28 ..
Create a new folder under /tmp
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "mkdir /tmp/demo"
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "ls -la /tmp"
total 0
drwxrwxrwt 1 root root 18 Jan 25 15:31 .
drwxr-xr-x 1 root root 51 Jan 25 15:28 ..
drwxr-xr-x 2 root root 6 Jan 25 15:31 demo
Write some data to a file under the created folder
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "dd if=/dev/urandom of=/tmp/demo/file.txt bs=1048576 count=800"
800+0 records in
800+0 records out
List the contents of the created folder to see the new file
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "ls -la /tmp/demo"
total 819200
drwxr-xr-x 2 root root 22 Jan 25 15:32 .
drwxrwxrwt 1 root root 18 Jan 25 15:31 ..
-rw-r--r-- 1 root root 838860800 Jan 25 15:32 file.txt
Perform a read action on the file
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "md5sum /tmp/demo/file.txt"
43f485495596da38dcaa10f160abb7e7 /tmp/demo/file.txt
Remove the created folder and all its content
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "rm -Rf /tmp/demo"
kubectl exec mypod -n mynamespace -c myapp -- /bin/sh -c "ls -la /tmp/"
total 0
drwxrwxrwt 1 root root 6 Jan 25 15:38 .
drwxr-xr-x 1 root root 51 Jan 25 15:28 ..
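Note that the exec session above reads and writes under /tmp, which in mypod.yaml is container-local storage; the PVC is mounted at /data. A sketch of a persistence check that goes through the PVC-backed path instead (same cluster context assumed; timeouts are illustrative):

```shell
# Write a marker file onto the PVC-backed mount (/data, per mypod.yaml).
kubectl -n mynamespace exec mypod -c myapp -- /bin/sh -c "echo persistent > /data/marker"

# Recreate the pod; the PVC and the data on it survive pod deletion.
kubectl -n mynamespace delete pod mypod
kubectl apply -f mypod.yaml
kubectl -n mynamespace wait pod/mypod --for=condition=Ready --timeout=300s

# The marker file should still be present after the pod comes back.
kubectl -n mynamespace exec mypod -c myapp -- cat /data/marker
```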
body:
{
  "config-name": "config-template",
  "config-version": "1.4.0",
  "source-branch": "testing_storage_class",
  "source-org": "ArbelNathan",
  "storage-class-parameters": [
    {
      "name": "sc-test",
      "pool": "arbel_pool",
      "secret-name": "demo-svc",
      "secret-namespace": "default",
      "VolumeExpansion": "false"
    }
  ],
  "storage-template-name": "ibm-csi-block",
  "storage-template-version": "1.4.0",
  "user-config-parameters": {
    "namespace": "mynamespace"
  }
}
After deployment:
kubectl get sc sc-test -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    deploy.razee.io/last-applied-configuration: '{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"name":"sc-test"},"provisioner":"block.csi.ibm.com","parameters":{"pool":"arbel_pool","csi.storage.k8s.io/provisioner-secret-name":"demo-svc","csi.storage.k8s.io/provisioner-secret-namespace":"default","csi.storage.k8s.io/controller-publish-secret-name":"demo-svc","csi.storage.k8s.io/controller-publish-secret-namespace":"default","csi.storage.k8s.io/controller-expand-secret-name":"demo-svc","csi.storage.k8s.io/controller-expand-secret-namespace":"default","csi.storage.k8s.io/fstype":"ext4"},"allowVolumeExpansion":false}'
  creationTimestamp: "2021-03-02T13:44:09Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:metadata:
        f:annotations:
          .: {}
          f:deploy.razee.io/last-applied-configuration: {}
      f:parameters:
        .: {}
        f:csi.storage.k8s.io/controller-expand-secret-name: {}
        f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
        f:csi.storage.k8s.io/controller-publish-secret-name: {}
        f:csi.storage.k8s.io/controller-publish-secret-namespace: {}
        f:csi.storage.k8s.io/fstype: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:pool: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: unknown
    operation: Update
    time: "2021-03-02T13:44:09Z"
  name: sc-test
  resourceVersion: "1561428"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/sc-test
  uid: 1b960d11-ade0-4d5a-a7ca-fbe0ff35ca93
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: demo-svc
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/controller-publish-secret-name: demo-svc
  csi.storage.k8s.io/controller-publish-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: demo-svc
  csi.storage.k8s.io/provisioner-secret-namespace: default
  pool: arbel_pool
provisioner: block.csi.ibm.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
IBM block storage CSI driver

Description: The IBM block storage CSI driver is leveraged by Kubernetes persistent volumes (PVs) to dynamically provision block storage for use with stateful containers. It is based on an open-source IBM project (CSI driver) and is included as part of IBM storage orchestration for containers.

IBM storage orchestration for containers enables enterprises to implement a modern, container-driven hybrid multicloud environment that can reduce IT costs and enhance business agility, while continuing to derive value from existing systems. By leveraging CSI (Container Storage Interface) drivers for IBM storage systems, Kubernetes persistent volumes (PVs) can be dynamically provisioned for block or file storage to be used with stateful containers, such as database applications (IBM Db2®, MongoDB, PostgreSQL, etc.) running in Red Hat® OpenShift® Container Platform and/or Kubernetes clusters. Storage provisioning can be fully automated, with additional support from cluster orchestration systems to automatically deploy, scale, and manage containerized applications.

For further details about the storage solution, refer to https://www.ibm.com/support/knowledgecenter/SSRQ8T_1.4.0/csi_block_storage_kc_welcome.html