kubernetes/examples

Kubernetes application example tutorials

NFS example with a cluster-local service name only works on GKE, not on minikube/kubeadm #390

Closed · opened by ensonic · closed 3 years ago

ensonic commented 4 years ago

This might be a follow-up to #237. The change from a fixed IP to server: nfs-server.default.svc.cluster.local in https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml#L11 does not seem to be well tested. It works on GKE through some black magic, but on neither minikube nor kubeadm clusters. The main problem is that the controller does not resolve the service name against CoreDNS before building the mount command line that will be run on the node's host system. But there is more: even after I patched the host's resolver so that tools like dig, host, and nslookup can resolve the cluster-local name, /sbin/mount.nfs still fails.

```
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7ecbc79e-2228-4447-9667-ef5f590d473d/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs nfs-server.default.svc.cluster.local:/exports /var/lib/kubelet/pods/7ecbc79e-2228-4447-9667-ef5f590d473d/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit: run-rf00d34506d784765bb629f16c2941921.scope
mount.nfs: Failed to resolve server nfs-server.default.svc.cluster.local: Name or service not known
```

To be clear, no one wants to use a fixed IP address, so using the service name is a great change, but it also needs to actually work :)
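For context, here is the relevant fragment of the linked nfs-pv.yaml, paraphrased as a sketch (the export path is taken from the mount output above; see the linked file for the authoritative version):

```yaml
# Paraphrased sketch of staging/volumes/nfs/nfs-pv.yaml from the example.
# The server field is handed verbatim to mount.nfs on the node's host
# system, where cluster-local DNS names are typically not resolvable.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # Resolvable from pods via CoreDNS, but not from the node's own
    # resolver on minikube or kubeadm clusters.
    server: nfs-server.default.svc.cluster.local
    path: "/exports"
```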

ensonic commented 4 years ago

Using a CSI driver such as https://github.com/kubernetes-csi/csi-driver-nfs will work with an in-cluster NFS server.
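A minimal sketch of that workaround, assuming csi-driver-nfs is installed with its nfs.csi.k8s.io driver (the volumeHandle, capacity, and share below are illustrative, not taken from this thread). It works because the mount is performed by the driver's node-plugin pod, which can use cluster DNS, rather than by mount.nfs directly on the node's host system:

```yaml
# Sketch only: a statically provisioned PersistentVolume backed by
# csi-driver-nfs. volumeHandle is an arbitrary unique ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-csi
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-server.default.svc.cluster.local/exports
    volumeAttributes:
      # The in-cluster service name works here because the CSI node
      # plugin resolves it before mounting.
      server: nfs-server.default.svc.cluster.local
      share: /exports
```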

saiyam1712 commented 4 years ago

Resolving through the service name would be a great change. Can it be taken into consideration ASAP?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/examples/issues/390#issuecomment-759739620):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

vikramparth commented 3 years ago

I'm also running into this issue with docker desktop. Any workarounds here? Thanks.