ehough / docker-nfs-server

A lightweight, robust, flexible, and containerized NFS server.
https://hub.docker.com/r/erichough/nfs-server/
GNU General Public License v3.0

nfsv4 not working #24

Closed RyanEwen closed 5 years ago

RyanEwen commented 5 years ago

I don't seem to be able to mount my share if I disable nfsv3 support in the container.

sudo mount -t nfs4 dev.ryan-ewen.com:/test /home/rewen/test

If I leave NFSv3 support enabled, I still cannot mount my share using that command unless the NFS server has all four ports forwarded. It may be using NFSv3 even though I specify nfs4? I'm not sure.

ehough commented 5 years ago

NFSv4 can be a little tricky, but we'll figure it out.

First, please add -vvv to your mount command and post the output. i.e.

sudo mount -vvv -t nfs4 dev.ryan-ewen.com:/test /home/rewen/test

That will tell you which protocol version is actually being used, along with clues as to why v4 isn't playing nice. I think, though I'm not 100% sure, that even if you explicitly specify -t nfs4, mount will try v4 and then fall back to v3. That could explain why things work for you as long as v3 is still enabled.

Second, could you post the output of your server logs with debug mode enabled? Either here or on pastebin.

Last, would you post your full docker run command or docker-compose.yml?
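
For reference, a bare-bones invocation of this image looks something like the following; the host paths are just placeholders, and the README has the full list of environment variables:

docker run -v /host/path/to/exports.txt:/etc/exports:ro \
           -v /host/path/to/downloads:/downloads        \
           -e NFS_LOG_LEVEL=DEBUG                       \
           --cap-add SYS_ADMIN                          \
           -p 2049:2049                                 \
           erichough/nfs-server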

All this should give me a good idea of what's happening on both the server and client.

RyanEwen commented 5 years ago

mount.nfs4: timeout set for Mon Apr  1 10:48:22 2019
mount.nfs4: trying text-based options 'vers=4.2,addr=1.2.3.4,clientaddr=192.168.86.5'
mount.nfs4: mount(2): No such file or directory
mount.nfs4: trying text-based options 'addr=1.2.3.4'
mount.nfs4: prog 100003, trying vers=3, prot=6
mount.nfs4: portmap query retrying: RPC: Unable to receive
mount.nfs4: prog 100003, trying vers=3, prot=17
mount.nfs4: portmap query failed: RPC: Unable to receive - Connection refused

(replaced server IP with 1.2.3.4)

ehough commented 5 years ago

Thanks. So that tells us that mount does indeed fall back to NFSv3 when v4 fails, and that NFSv4 is failing due to:

mount.nfs4: mount(2): No such file or directory

What are the contents of your /etc/exports? You can find them in the container logs (whether or not debug mode is enabled).
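
If it's easier, you can also pull it straight from the running container, e.g. (substitute your container's name; the second command shows the export table as the kernel actually sees it):

docker exec your-nfs-container cat /etc/exports
docker exec your-nfs-container exportfs -v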

RyanEwen commented 5 years ago

I tried simply /downloads * and currently have /downloads *(fsid=0,rw,sync)

FYI

nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       SETTING UP ...
nfs_1  | ==================================================================
nfs_1  | ----> setup complete
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       STARTING SERVICES ...
nfs_1  | ==================================================================
nfs_1  | ----> starting rpcbind
nfs_1  | ----> starting exportfs
nfs_1  | exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/downloads".
nfs_1  |   Assuming default behaviour ('no_subtree_check').
nfs_1  |   NOTE: this default has changed since nfs-utils version 1.0.x
nfs_1  | 
nfs_1  | ----> starting rpc.mountd on port 32767
nfs_1  | ----> starting rpc.nfsd on port 2049 with 2 server thread(s)
nfs_1  | ----> terminating rpcbind
nfs_1  | ----> all services started normally
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       SERVER STARTUP COMPLETE
nfs_1  | ==================================================================
nfs_1  | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4
nfs_1  | ----> list of container exports:
nfs_1  | ---->   /downloads *(fsid=0,rw,sync)
nfs_1  | ----> list of container ports that should be exposed: 2049 (TCP)
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs_1  | ==================================================================

RyanEwen commented 5 years ago

I tried some other options in my exports file. Currently have /downloads *(rw,nohide,insecure,no_subtree_check,sync)

I also enabled debug logging for your container, which printed the following (note that nothing is ever logged when I try to mount from another machine):

nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       SETTING UP ...
nfs_1  | ==================================================================
nfs_1  | ----> log level set to DEBUG
nfs_1  | ----> will use 2 rpc.nfsd server thread(s) (1 thread per CPU)
nfs_1  | ----> /etc/exports is bind-mounted
nfs_1  | ----> kernel module nfs is loaded
nfs_1  | ----> kernel module nfsd is loaded
nfs_1  | ----> setup complete
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       STARTING SERVICES ...
nfs_1  | ==================================================================
nfs_1  | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs_1  | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs_1  | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
nfs_1  | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
nfs_1  | ----> starting rpcbind
nfs_1  | ----> starting exportfs
nfs_1  | exporting *:/downloads
nfs_1  | ----> starting rpc.mountd on port 32767
nfs_1  | ----> starting rpc.nfsd on port 2049 with 2 server thread(s)
nfs_1  | rpc.nfsd: knfsd is currently down
nfs_1  | rpc.nfsd: Writing version string to kernel: -2 -3 +4 +4.1 +4.2
nfs_1  | rpc.nfsd: Created AF_INET TCP socket.
nfs_1  | rpc.nfsd: Created AF_INET UDP socket.
nfs_1  | rpc.nfsd: Created AF_INET6 TCP socket.
nfs_1  | rpc.nfsd: Created AF_INET6 UDP socket.
nfs_1  | ----> terminating rpcbind
nfs_1  | ----> all services started normally
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       SERVER STARTUP COMPLETE
nfs_1  | ==================================================================
nfs_1  | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4
nfs_1  | ----> list of container exports:
nfs_1  | ---->   /downloads     *(rw,sync,wdelay,nohide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
nfs_1  | ----> list of container ports that should be exposed: 2049 (TCP)
nfs_1  | 
nfs_1  | ==================================================================
nfs_1  |       READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs_1  | ==================================================================

RyanEwen commented 5 years ago

Got it!

It appears to have nothing to do with your container or Docker, but rather with NFSv4 working differently than prior versions.

It seems we have to specify fsid=0 for one of the exports (which I did try before), but the trick is that, when mounting from the client machine, we have to treat that export as / on the server.

/downloads *(fsid=0) gets mounted like mount -t nfs4 my-server:/ /some/local/path
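
To spell it out, the working combination looks something like this (server hostname and local path are placeholders). Inside the container, /etc/exports contains:

/downloads *(fsid=0,rw,sync)

and the client mounts the pseudo-root instead of the real path:

sudo mount -t nfs4 my-server:/ /some/local/path

Mounting my-server:/downloads is what triggers the "No such file or directory" error, because with fsid=0 that export becomes / in the NFSv4 pseudo-filesystem.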

ehough commented 5 years ago

Woot!

The fsid stuff was where I was headed next, had you not gotten back to it already. Usually the server doesn't need you to explicitly set fsid, which is why I don't mention it in the docs. But once it is set (i.e. fsid=0), I've noticed it tends to cause exactly the confusion and frustration you just experienced.

Is /downloads bind-mounted to your container? Or is it mounted to the container in any other unusual way? I'm just trying to figure out why you would need to add fsid in the first place.

RyanEwen commented 5 years ago

I probably don't need fsid now that you mention it. The key was just to not specify the full export path in the mount command.

shinebayar-g commented 4 years ago

Hi @RyanEwen, thanks for pointing out that we have to mount like mount -t nfs4 my-server:/ /some/local/path when using NFSv4. In this case, what if we have 2 entries in /etc/exports?

Edit: It just mounts the first entry in /etc/exports.

I probably don't need fsid now that you mention it. The key was just to not specify the full export path in the mount command.

I found out that if I remove fsid=0 from /etc/exports, I get the initial error back again.

$: mount -t nfs4 127.0.0.1:/ /testing
mount.nfs4: mounting 127.0.0.1:/ failed, reason given by server: No such file or directory
mount: mounting 127.0.0.1:/ on /testing failed: No such file or directory
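
For the multi-share case, a common NFSv4 layout seems to be a dedicated fsid=0 pseudo-root with the real shares nested underneath it; a rough sketch (directory names are illustrative):

/nfs           *(fsid=0,ro,no_subtree_check)
/nfs/downloads *(rw,sync,no_subtree_check)
/nfs/media     *(rw,sync,no_subtree_check)

Clients then mount each share by its path relative to the pseudo-root:

mount -t nfs4 my-server:/downloads /mnt/downloads
mount -t nfs4 my-server:/media /mnt/media

(If the shares live on separate filesystems, crossmnt on the pseudo-root or nohide on the children may also be needed.)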