Closed fragolinux closed 12 months ago
More details: going inside one of the OpenEBS nfs-pvc pods, I get this:
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # exportfs
/nfsshare <world>
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # cat /etc/exports
/nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # mkdir -p t
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # mount -t nfs 127.0.0.1:/nfsshare t
mount.nfs: mounting 127.0.0.1:/nfsshare failed, reason given by server: No such file or directory
mount: mounting 127.0.0.1:/nfsshare on t failed: Not supported
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # ls /
Dockerfile README.md bin dev etc home lib media mnt nfsshare opt proc root run sbin srv sys usr var
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ # ls /nfsshare/
root@nfs-pvc-e98ed6ad-f078-4b24-8cf4-687ede0c872a-5fc88768cb-mwtk7:~ #
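The in-pod mount above fails without a version option. As a sketch of the same check with the NFS version pinned (assuming the pod's NFS client supports the standard `vers` mount option, as used in the `mountOptions` workaround mentioned in the next comment):

```sh
# Hypothetical retry of the in-pod mount, forcing NFSv4.1 instead of
# letting mount.nfs negotiate; mirrors the mountOptions vers=4.1 fix.
mkdir -p t
mount -t nfs -o vers=4.1 127.0.0.1:/nfsshare t
```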
Following this guide, I now have reading and writing pods working, thanks to the mountOptions `vers: 4.1` parameter... but how do I set this via the values in a HelmRelease? I don't see a section for it in the templates...
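Until the chart exposes `mountOptions` in its values, one workaround is to skip the chart's StorageClass template and define the class yourself using the standard Kubernetes `mountOptions` field. This is only a sketch: the class name `nfs`, the `openebs.io/nfsrwx` provisioner string, and the `backendStorageClass` parameter are assumptions that must match your chart's actual defaults.

```yaml
# Hypothetical standalone StorageClass carrying the vers=4.1 workaround.
# Name, provisioner, and parameters are assumptions -- verify against
# the StorageClass the chart currently generates (kubectl get sc nfs -o yaml).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: openebs.io/nfsrwx
mountOptions:
  - vers=4.1          # force NFSv4.1 on the client mount
parameters:
  backendStorageClass: "local-path"   # k3d's default backing class (assumed)
```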
Submitted a PR to fix this; it bothered me on my local k3d cluster too.
@pentago thanks! Hope the PR gets merged soon.
Describe the bug: Installed OpenEBS on top of k3d 5.4.4 as a HelmRelease using the nfs-provisioner chart, with these values (the indentation is due to the HelmRelease YAML; this block sits under spec.values):
I then create my deployments (also via Helm releases), which request a couple of PVCs using the given StorageClass name "nfs". I can see the PVCs created correctly, but they cannot be mounted by the requesting pods, with these errors in the event logs:
10.43.x.x is the service network; pods are on the 10.42.x.x network...
Expected behaviour: pods start with their NFS PVCs mounted.
The output of the following commands will help us better understand what's going on:
kubectl get pods -n <openebs_namespace> --show-labels
kubectl get pvc -n <openebs_namespace>
kubectl get pvc -n <application_namespace>
Environment details:
- OpenEBS version (use `kubectl get po -n openebs --show-labels`):
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration: k3d 5.4.4
- OS (e.g. `cat /etc/os-release`): macOS Big Sur latest