Using a CSI driver, such as https://github.com/kubernetes-csi/csi-driver-nfs, will work with an in-cluster NFS server.
Resolving through the service name would be a great change. Can it be taken into consideration ASAP?
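For reference, a minimal sketch of a statically provisioned PersistentVolume using csi-driver-nfs; the driver must already be installed, and the Service name nfs-server.default.svc.cluster.local and the share path are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-csi
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: nfs.csi.k8s.io
    # volumeHandle only needs to be unique across PVs in the cluster
    volumeHandle: nfs-server.default.svc.cluster.local/share
    volumeAttributes:
      # The CSI node plugin performs the mount, so an in-cluster Service
      # name can resolve here even when the node's host resolver cannot.
      server: nfs-server.default.svc.cluster.local
      share: /
```

A PersistentVolumeClaim bound to this PV can then be mounted by pods as usual.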
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
I'm also running into this issue with Docker Desktop. Any workarounds here? Thanks.
This might be a follow-up to #237. The change from a fixed IP to
server: nfs-server.default.svc.cluster.local
in https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml#L11 does not seem to be well tested. It works on GKE through some black magic, but not on minikube or kubeadm clusters. The main problem is that the controller does not resolve the service name against CoreDNS before building the mount command line that is run on the node's host system. But there is more: even after I patched the host's resolver so that tools like dig, host, and nslookup can resolve the cluster-local name, /sbin/mount.nfs still fails. To be clear, no one wants to use a fixed IP address, so using the service name is a great change, but it also needs to actually work :)
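For context, the PersistentVolume in the linked example looks roughly like the sketch below (field values are illustrative, not copied from the repo). On clusters where the node's host cannot resolve cluster-local names, a known stopgap is to put the Service's ClusterIP into server: instead of the DNS name, at the cost of hard-coding exactly the kind of fixed address this issue wants to avoid:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # The in-tree plugin hands this name straight to mount.nfs on the
    # node's host, which typically does not use the cluster DNS.
    server: nfs-server.default.svc.cluster.local
    # Stopgap: replace the name above with the Service's ClusterIP, e.g. the
    # value shown by `kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'`.
    path: "/"
```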