Closed: Slayug closed this issue 3 years ago.
I have to admit that I'm not well versed in Kubernetes, but I'll do my best to assist in getting things working.
mount -v 10.0.2.2:/volume ./foo
Not sure if your "access denied" error message came from k8s or not. Could you share the output of docker logs ...? Also, what's in /tmp/toast?
ls -al /tmp/toast
Recently another user experienced some weirdness when serving from their host's /tmp directory, so perhaps try changing it from /tmp/toast to /home/slayug, for instance, to see if that changes anything? You have fsid=0 in the export, so I'm doubtful this is related.
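For instance, a minimal docker-compose sketch along those lines (the host path is an assumption, and double-check the image tag against the one you're actually using):

```yaml
# Hypothetical sketch: serve a directory outside of /tmp.
# "/home/slayug/volume" is an assumed host path; adjust to your machine.
services:
  nfs:
    image: erichough/nfs-server
    privileged: true
    volumes:
      - /home/slayug/volume:/volume
    environment:
      - NFS_EXPORT_0=/volume *(rw,no_root_squash,no_subtree_check,fsid=0)
    ports:
      - "2049:2049"
```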
Thanks for your help. So I tried to mount from a pod that's otherwise working fine:
# mount -vvv 10.0.2.2:/volume ./foo
mount.nfs: timeout set for Thu Feb 7 09:12:09 2019
mount.nfs: trying text-based options 'vers=4.2,addr=10.0.2.2,clientaddr=172.17.0.10'
mount.nfs: mount(2): Operation not permitted
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
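Since rpc.statd isn't running in the pod, one thing worth trying is keeping locks local, as the error message suggests. A hedged sketch of the retried command (server address and target directory are the ones from this thread; whether it succeeds still depends on the pod being allowed to call mount at all):

```shell
# Build the mount invocation with locking kept local (-o nolock), which
# avoids the rpc.statd requirement. This only echoes the command; drop the
# "echo" to actually run it inside the pod.
SERVER=10.0.2.2        # NFS server as seen from the pod (assumption)
EXPORT=/volume         # export path from the server's /etc/exports
OPTS="nolock,vers=4.2" # nolock: no remote locking; vers: pin the NFS version
echo mount -v -o "$OPTS" "$SERVER:$EXPORT" ./foo
```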
My logs from the Docker container:
Attaching to dev_nfs_1
nfs_1 |
nfs_1 | ==================================================================
nfs_1 | SETTING UP ...
nfs_1 | ==================================================================
nfs_1 | ----> building /etc/exports from environment variables
nfs_1 | ----> collected 1 valid export(s) from NFS_EXPORT_* environment variables
nfs_1 | ----> kernel module nfs is loaded
nfs_1 | ----> kernel module nfsd is loaded
nfs_1 | ----> setup complete
nfs_1 |
nfs_1 | ==================================================================
nfs_1 | STARTING SERVICES ...
nfs_1 | ==================================================================
nfs_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs_1 | ----> starting rpcbind
nfs_1 | ----> exporting filesystem(s)
nfs_1 | exporting *:/volume
nfs_1 | ----> starting rpc.mountd on port 32767
nfs_1 | ----> starting statd on port 32765 (outgoing connections from port 32766)
nfs_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs_1 | rpc.nfsd: knfsd is currently down
nfs_1 | rpc.nfsd: Writing version string to kernel: -2 +3 +4 +4.1 +4.2
nfs_1 | rpc.nfsd: Created AF_INET TCP socket.
nfs_1 | rpc.nfsd: Created AF_INET UDP socket.
nfs_1 | rpc.nfsd: Created AF_INET6 TCP socket.
nfs_1 | rpc.nfsd: Created AF_INET6 UDP socket.
nfs_1 | ----> all services started normally
nfs_1 |
nfs_1 | ==================================================================
nfs_1 | SERVER STARTUP COMPLETE
nfs_1 | ==================================================================
nfs_1 | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
nfs_1 | ----> list of container exports:
nfs_1 | ----> /volume *(rw,no_root_squash,no_subtree_check,fsid=0)
nfs_1 | ----> list of container ports that should be exposed:
nfs_1 | ----> 111 (TCP and UDP)
nfs_1 | ----> 2049 (TCP and UDP)
nfs_1 | ----> 32765 (TCP and UDP)
nfs_1 | ----> 32767 (TCP and UDP)
nfs_1 |
nfs_1 | ==================================================================
nfs_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs_1 | ==================================================================
Even when it 'works' with the next part, I don't get any more logs.
I moved the folder outside of /tmp following your link and this comment: https://github.com/ehough/docker-nfs-server/issues/2#issuecomment-459776530
- NFS_EXPORT_0=/volume *(rw,no_root_squash,no_subtree_check,fsid=0)
And I have no error when the pod starts, but nothing is written into the volume. Inside my pod I have these files:
root@db-statefulset-0:/# ls /var/lib/postgresql/data -al
total 132
drwx------ 20 postgres root 4096 Feb 7 10:03 .
drwxr-xr-x 1 postgres postgres 4096 Feb 6 08:17 ..
drwx------ 5 postgres 70 4096 Feb 7 09:45 base
drwxr-xr-x 2 postgres root 4096 Feb 7 09:52 ezez
drwx------ 2 postgres 70 4096 Feb 7 10:05 global
...
-rw------- 1 postgres 70 88 Feb 7 09:45 postgresql.auto.conf
-rw------- 1 postgres 70 22729 Feb 7 09:45 postgresql.conf
-rw------- 1 postgres 70 36 Feb 7 10:03 postmaster.opts
-rw------- 1 postgres postgres 94 Feb 7 10:03 postmaster.pid
When I go into the NFS Docker container I see nothing in /volume. I tried creating a directory ooop there, and I can see it from my host folder:
ls /volume/ -al
total 16
drwxr-xr-x 3 root root 4096 7 févr. 11:05 ./
drwxr-xr-x 19 root root 4096 7 févr. 10:39 ../
drwxr-xr-x 2 root root 4096 7 févr. 11:05 ooop/
I also tried, as root from my pod, to write a directory into the mount path /var/lib/postgresql/data:
drwxr-xr-x 2 root root 4096 Feb 7 10:15 fluuuuuuuuuuuuuuuute
but I still see nothing in the host dir.
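One quick sanity check (my own suggestion, not from this thread): verify from inside the pod that the data path is really an NFS mount. If /proc/mounts shows no nfs entry, writes land on the container's local filesystem, which would explain the empty host directory. A self-contained sketch using a sample /proc/mounts line (the real check in the pod is just grep ' nfs' /proc/mounts):

```shell
# "sample" stands in for a line from the pod's /proc/mounts (assumption).
sample='10.0.2.2:/volume /var/lib/postgresql/data nfs4 rw,vers=4.2 0 0'
if printf '%s\n' "$sample" | grep -q ' nfs'; then
  echo "NFS mount present"          # prints: NFS mount present
else
  echo "no NFS mount: writes stay inside the container"
fi
```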
The server logs look totally normal.
mount.nfs: mount(2): Operation not permitted
This tells me that the pod's kernel isn't allowing the mount to happen. I see that you have privileged in your spec, but let's verify that we have the right capabilities.
From your pod, try this:
apk add --update libcap
capsh --print
To be able to perform a mount of any kind, you'll need to see cap_sys_admin in the output of capsh --print, e.g.
capsh --print | grep -E "^Current: = .*,?cap_sys_admin(,|$)"
Do you see cap_sys_admin?
I don't understand why I had no error when running the pod yesterday; the "access denied" came back this morning. Anyway, I still have the problem. I installed libcap, and at first the pod did not have cap_sys_admin, so I added it with
securityContext:
  capabilities:
    add:
      - SYS_ADMIN
which gives me:
capsh --print | grep -E "^Current: = .*,?cap_sys_admin(,|$)"
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,**cap_sys_admin**,cap_mknod,cap_audit_write,cap_setfcap+eip
but I still cannot do anything from the pod:
# mount -v 10.0.2.2:/volume ./foor
mount.nfs: timeout set for Fri Feb 8 10:34:58 2019
mount.nfs: trying text-based options 'vers=4.2,addr=10.0.2.2,clientaddr=172.17.0.2'
mount.nfs: mount(2): Operation not permitted
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
and I still get access denied when I try to add the NFS volume.
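For what it's worth, CAP_SYS_ADMIN alone is sometimes not enough, because the container runtime's seccomp or AppArmor profile can still block mount(2); a fully privileged securityContext is the heavier hammer. A hypothetical pod-spec fragment (the field names are standard Kubernetes; whether this is appropriate for your cluster is an assumption):

```yaml
# Hypothetical sketch: run the container privileged so mount(2) is allowed.
# This is much broader than adding SYS_ADMIN; use with care.
securityContext:
  privileged: true
```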
I had the same problem. I discovered that the server is the source of the problem: it can't mount itself.
bash-4.4# mount -t nfs4 127.0.0.1:/export /mnt
mount.nfs4: mounting 127.0.0.1:/export failed, reason given by server: No such file or directory
mount: mounting 127.0.0.1:/export on /mnt failed: No such file or directory
V3 works fine:
bash-4.4# mount -t nfs 127.0.0.1:/export /mnt
bash-4.4# df /mnt
Filesystem        1K-blocks  Used Available Use% Mounted on
127.0.0.1:/export  51344384 53248  51274752   0% /mnt
I have been experimenting with my own container (not docker-nfs-server), and found this:
$ sudo mount -o nfsvers=3 localhost:/nfs3/share /tmp/mount
mount.nfs: access denied by server while mounting localhost:/nfs3/share
$ sudo mount -o nfsvers=3 127.0.0.1:/nfs3/share /tmp/mount
mount.nfs: access denied by server while mounting 127.0.0.1:/nfs3/share
$ sudo mount -o nfsvers=3 172.16.111.222:/nfs3/share /tmp/mount
That is, it worked when I specified the host machine's IP address, but not when I specified localhost. Just posting this observation here in case it's useful to anyone.
If anyone comes across the same problem trying to reach the NFS server from a pod: I wasn't using the right IP. Instead of 10.0.2.2 I had to use 192.168.99.1.
Thanks @DouglasRoyds, I've been struggling for hours to find your message.
In my case, using NFSv4 with the insecure flag in the export expression made the mount with localhost succeed.
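In other words, something like the following environment entry (adapted from the NFS_EXPORT_0 used earlier in this thread; insecure permits client source ports above 1023, which loopback mounts often use):

```yaml
# docker-compose environment entry for the nfs-server container, with
# "insecure" added to the export options from earlier in the thread.
- NFS_EXPORT_0=/volume *(rw,insecure,no_root_squash,no_subtree_check,fsid=0)
```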
Hello, I'm trying to rebuild locally the same environment I have in production with a remote NFS server: my cluster and its pods run inside minikube, and they should access the NFS Docker container on my host machine.
So for instance I have the following pod:
Yes, I know NFS is not the best choice for a database, but this is not the final usage, and it's not the question here.
my volumes:
And I have this error on my pod:
My docker compose for the nfs server:
When I run showmount from an exec shell in my pod, I get this:
Thanks for any help; I hope I didn't forget something obvious.
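For anyone reading along, an inline NFS volume in a pod spec generally looks like the following (the server IP, paths, and names below are illustrative assumptions, not the poster's actual values):

```yaml
# Hypothetical pod-spec fragment mounting the containerized NFS export.
# 192.168.99.1 is the host as seen from a VirtualBox-driven minikube VM
# (per a comment above); adjust for your environment.
volumes:
  - name: data
    nfs:
      server: 192.168.99.1
      path: /volume
containers:
  - name: db
    image: postgres
    volumeMounts:
      - name: data
        mountPath: /var/lib/postgresql/data
```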