keskival opened this issue 1 year ago
Investigated further: it is actually the Dynamic NFS pod's ephemeral storage where the data gets held up without being passed to Jiva at all. Sorry about that, closing.
Reopening this issue because it seems to be related to the Jiva side after all.
Tested with a minimal example with a Jiva replication set up, and the problem happens even without the Dynamic NFS Provisioner. It seems to be a CSI, OpenEBS, or Jiva issue, though possibly I have misunderstood something.
Applying these resources and then running

```shell
kubectl exec -it -n test test -- dd if=/dev/random of=/mnt/test/sample.txt bs=1G count=1
```

doesn't put this data on the replicas; it stays in the pod's ephemeral storage (just as described before for the Dynamic NFS Provisioner case).
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath2
parameters:
  pvDir: /var/openebs/local2
provisioner: microk8s.io/hostpath
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  annotations:
    meta.helm.sh/release-name: openebs
    meta.helm.sh/release-namespace: openebs
  labels:
    app.kubernetes.io/managed-by: Helm
  name: openebs-jiva-default-policy2
  namespace: openebs
spec:
  replicaSC: hostpath2
  target:
    replicationFactor: 3
---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: openebs
    meta.helm.sh/release-namespace: openebs
  labels:
    app.kubernetes.io/managed-by: Helm
  name: jiva-hostpath
parameters:
  cas-type: jiva
  policy: openebs-jiva-default-policy2
provisioner: jiva.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: test
  name: testpvc2
spec:
  storageClassName: jiva-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: test
spec:
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: testpvc2
  containers:
    - name: testcontainer
      image: ubuntu
      command:
        - "sleep"
        - "86400"
      volumeMounts:
        - mountPath: "/mnt/test"
          name: test
```
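To see where the written data actually ends up, a check along these lines can be made (a diagnostic sketch; the `pvc-*` directory names are placeholders for whatever IDs the cluster assigns, and the replica path follows the `pvDir` of the `hostpath2` StorageClass above):

```shell
# Size of the data as seen inside the pod:
kubectl exec -n test test -- du -sh /mnt/test

# Size of each Jiva replica's sparse image on the node(s):
sudo du --si -s /var/openebs/local2/pvc-*/volume-head-*.img
```

If the pod-side number grows toward 1G while the replica images stay tiny, the writes never reached the replicas.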
This mount also seems to prevent pod deletion unless the `--force` flag is used.
I installed OpenEBS Jiva separately; I now notice that installing full OpenEBS separately adds some pods named `openebs-ndm*`. I didn't have those, and I wondered if that makes a difference. Edit: No, that made no difference.
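The presence of those pods can be checked with something like this (a sketch, assuming the stock `openebs` namespace and the `openebs-ndm` name prefix from the full OpenEBS chart):

```shell
# List Node Disk Manager (NDM) pods, if any, in the openebs namespace.
kubectl get pods -n openebs | grep openebs-ndm
```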
Even with a clean installation of a single-node MicroK8S cluster, with OpenEBS Jiva installed via Helm into an empty cluster and set with replicationFactor=1, writing a one-gigabyte file to a PVC mount doesn't actually reach the replica:
```
tero@hubble:/var/openebs/local/pvc-f2dee082-14ec-44d1-b6cb-ac689ceb576f$ du --si -s *
4,1k  log.info
13k   replica.log
4,1k  revision.counter
70M   volume-head-000.img
4,1k  volume-head-000.img.meta
4,1k  volume.meta
```
No errors, no idea what is going on here.
Got it working by installing Jiva with:

```shell
microk8s addons repo add community https://github.com/canonical/microk8s-community-addons --reference main
microk8s enable community/openebs
```
I have no clue what the difference is. The hostpath points to `/var/snap/microk8s/common/var/openebs/local/`, unlike when the hostpath storage class is installed from the vanilla Helm chart, so maybe snap isolation of MicroK8S is the issue... Probably not, though, because the system is perfectly able to write files to `/var/openebs/local` even with the vanilla charts.
Another difference: for the MicroK8S add-on, the `openebs-jiva-csi-default` storage class has `volumeBindingMode` set to `Immediate` instead of `WaitForFirstConsumer`. Perhaps the volume existing when the mounting pod starts makes all the difference. Edit: No, that didn't make a difference.
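The binding modes of the two storage classes can be compared side by side with something like this (a sketch; the class names follow the ones discussed above and may differ per installation):

```shell
# Show only the name and volumeBindingMode of each StorageClass.
kubectl get sc openebs-jiva-csi-default jiva-hostpath \
  -o custom-columns=NAME:.metadata.name,BINDING:.volumeBindingMode
```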
What steps did you take and what happened:
I am using Dynamic NFS Provisioner on top of triple replicated Jiva volumes to make them ReadWriteMany.
After hard reboots, this seems to always happen:
It seems to me the data stored to the Jiva volumes does not go to the replicas, but instead goes to the mounting pod's local ephemeral storage. I would expect the data to go to the replicas, which are correctly set up, but instead Jiva stores the files under:

```
/var/snap/microk8s/common/var/lib/kubelet/pods/4ae2b03c-e03e-4088-92cb-dcf37ae2e118/volumes/kubernetes.io~csi/pvc-4f872aeb-385e-44e0-b0f1-9643a04ea4af/mount
```

which doesn't seem to be mounted anywhere.
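Whether that path is a real CSI/iSCSI mount or just a plain directory on the node's root filesystem can be checked on the node along these lines (a diagnostic sketch; substitute the pod UID and PVC name from the path above, and `iscsiadm` assumes open-iscsi is installed, which Jiva CSI requires):

```shell
# Does any iSCSI session to the Jiva target exist on this node?
sudo iscsiadm -m session

# Is the kubelet CSI mount path backed by a separate filesystem?
findmnt -T /var/snap/microk8s/common/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pvc-name>/mount
```

If `findmnt` reports the root filesystem as the source, the volume was never actually mounted there and writes land in local node storage.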
What did you expect to happen:
I would expect either an error during OpenEBS Jiva PVC pod start-up, or the Jiva replicas to be mounted to the mount directory via CSI/iSCSI so the data ends up on the replicas.
The output of the following commands will help us better understand what's going on:
Environment:
- Kubernetes version (`kubectl version`): MicroK8S v1.25.5
- OS (`/etc/os-release`): Ubuntu 22.04.1 LTS