flatcar / Flatcar

Flatcar project repository for issue tracking, project documentation, etc.
https://www.flatcar.org/
Apache License 2.0

NFSv4.2 is broken across different hosts #1565

Open MichaelEischer opened 6 days ago

MichaelEischer commented 6 days ago

Description

With Flatcar 3975.2.1 we see very odd NFS 4.2 behavior: one pod writes a file, but a pod on a different host cannot see the just-written file content.

NFS 3 / 4.1 work as expected (4.0 has not been tested). Flatcar 3815.2.5 is also unaffected.

Impact

NFS 4.2 mounts are unusable.

Environment and steps to reproduce

  1. Set-up:
    • at least two nodes in a k8s cluster running Flatcar 3975.2.1
    • set up nfs-ganesha:
      helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
      helm install my-release nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner
Update mount options in `StorageClass` `nfs`:

```
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: default
  labels:
    app: nfs-server-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-server-provisioner-1.8.0
    heritage: Helm
    release: my-release
  name: nfs
mountOptions:
- hard
- retrans=3
- proto=tcp
- nfsvers=4.2
- rsize=4096
- wsize=4096
- noatime
- nodiratime
provisioner: cluster.local/my-release-nfs-server-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
create the PVC:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```
create the pods (they must run on different hosts; a quick scheduling check is sketched after these steps):

```
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-1
  labels:
    app: nginx
spec:
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /test
  volumes:
  - name: config
    persistentVolumeClaim:
      claimName: test-dynamic-volume-claim
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nginx
        topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-2
  labels:
    app: nginx
spec:
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /test
  volumes:
  - name: config
    persistentVolumeClaim:
      claimName: test-dynamic-volume-claim
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nginx
        topologyKey: "kubernetes.io/hostname"
```
  2. Action(s):
    a. kubectl exec -it test-pod-1 -- bash -c 'echo "def" > /test/testfile'
    b. kubectl exec -it test-pod-2 -- bash -c 'cat /test/testfile'
  3. Error: The call to cat should return "def" but returns nothing. Note that both pods see accurate metadata for the file (using ls -la /test).
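
The repro depends on the two test pods actually landing on different nodes. A quick way to double-check that before comparing reads (pod names as in the manifests above):

```
# Confirm the two test pods were scheduled on different nodes (see the NODE column)
kubectl get pods test-pod-1 test-pod-2 -o wide
```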

Expected behavior

cat from test-pod-2 should be able to read the just-written file content. Note that test-pod-1 is able to read the file contents.

MichaelEischer commented 6 days ago

Just repeated the test with 3975.2.2 and it is also affected.

Edit: https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.6.55 contains a few NFS-related fixes, but it's unclear to me whether they would resolve this issue.

ader1990 commented 5 days ago

Hello, I could reproduce this behaviour on a two-node ARM64 Flatcar latest Alpha (4116.0.0) environment.

I think the issue is related to a host-level problem, as the files are also empty on the secondary host when viewed from the host perspective:

```
# from k8s node 2
cat /var/lib/kubelet/pods/2cf32713-9a7a-412e-b4fc-998741deb125/volumes/kubernetes.io~nfs/pvc-8f47140d-1164-42a3-816f-05b41a9633c9/test

# empty output
```
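
It can also help to confirm which NFS version the kubelet actually negotiated on that node; the mount options are visible in /proc/mounts (the pvc path in this repro is the one shown above):

```
# Show the negotiated protocol version and options for the NFS PVC mount on the node
grep nfs4 /proc/mounts
# a mount affected by this report shows vers=4.2 among the options
```
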
ader1990 commented 5 days ago

Flatcar main with kernel 6.6.56 is also affected.

ader1990 commented 4 days ago

Tested with Flatcar using kernel 6.10.9 and the issue is present there too; this seems to be a Linux kernel regression, or a tooling/containerd issue. It needs debugging to reproduce the case outside of k8s first and to better pinpoint the actual cause.
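
To take k8s out of the picture, a minimal manual repro could mount the Ganesha export directly from two hosts; SERVER and /export below are placeholders for whatever address and path the provisioner's NFS service actually exposes:

```
# on host A: mount the export with the same options as the StorageClass, write a file
mkdir -p /mnt/nfs-test
mount -t nfs4 -o vers=4.2,proto=tcp,hard SERVER:/export /mnt/nfs-test
echo "def" > /mnt/nfs-test/testfile

# on host B: mount the same export, read the file back
mkdir -p /mnt/nfs-test
mount -t nfs4 -o vers=4.2,proto=tcp,hard SERVER:/export /mnt/nfs-test
cat /mnt/nfs-test/testfile   # expected "def"; with this bug the content reads back empty/NUL
```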

ader1990 commented 3 days ago

https://github.com/torvalds/linux/commit/9cf2744d249144fc0fe17667b56da78216678378#diff-a24af2ce5442597efe8051684905db2be615f41703247fbce9a446e77f2e9587R214 -> from the Linux tree, this is the only change I see that might affect NFS in 6.6 or 6.10 compared to previous kernels.
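
If that change is the NFSv4.2 READ_PLUS default flip discussed below, one way to see whether a given node's kernel has the option enabled is to check its build config. A sketch, assuming the kernel exposes /proc/config.gz (requires CONFIG_IKCONFIG_PROC), with the usual on-disk config location as fallback:

```
# Check whether the running kernel was built with NFSv4.2 READ_PLUS enabled
zcat /proc/config.gz 2>/dev/null | grep NFS_V4_2_READ_PLUS \
  || grep NFS_V4_2_READ_PLUS "/boot/config-$(uname -r)"
```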

MichaelEischer commented 3 days ago

I'm wondering whether the underlying issue might be a bug in the ganesha NFS server that is now exposed by the read_plus default change.

Edit: I did some additional testing and the output of cat /test/testfile is actually not empty, but rather consists only of null bytes with the expected length.
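
A quick way to see that from the second pod, rather than relying on cat, is to dump the bytes and the file size (pod names from the repro above):

```
# On the affected node the file has the expected size but reads back as NUL bytes
kubectl exec -it test-pod-2 -- od -c /test/testfile
kubectl exec -it test-pod-2 -- stat -c '%s' /test/testfile
```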

ader1990 commented 1 day ago

> I'm wondering whether the underlying issue might be a bug in the ganesha NFS server that is now exposed by the read_plus default change.
>
> Edit: I did some additional testing and the output of cat /test/testfile is actually not empty, but rather consists only of null bytes with the expected length.

I am trying now to build a kernel with the read_plus disabled, let's see how that goes.
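
For reference, a minimal sketch of turning the option off for such a test build, using the kernel's scripts/config helper from inside the kernel source tree:

```
# Disable READ_PLUS in the existing .config before building the test kernel
scripts/config --disable NFS_V4_2_READ_PLUS
grep NFS_V4_2_READ_PLUS .config   # should now print "# CONFIG_NFS_V4_2_READ_PLUS is not set"
```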

tormath1 commented 1 day ago

We might want to update our NFS test then, to catch further regressions like this. (https://github.com/flatcar/mantle/blob/02348d65a5f9bd72f3e7412da54a688b7f972790/kola/tests/kubeadm/kubeadm.go#L237)
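
Whatever the final kola plumbing looks like, the core of such a regression check is roughly the cross-node write/read below (pod names and paths follow the repro above; this is only a sketch of the assertion, not the mantle test code itself):

```
# Write on the pod on node 1, read back from the pod on node 2; the contents must match
kubectl exec test-pod-1 -- sh -c 'echo "def" > /test/testfile'
out="$(kubectl exec test-pod-2 -- cat /test/testfile)"
[ "$out" = "def" ] || echo "NFS cross-node read regression detected"
```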

ader1990 commented 1 day ago

Tested with NFS_V4_2_READ_PLUS=n and the issue is solved. This is an upstream issue - in the kernel or in the NFS server implementation - and it needs to be properly reported. Any idea where it is best reported?

jepio commented 1 day ago

Normally https://lore.kernel.org/linux-nfs and/or the upstream for the server implementation, but... the nfs-ganesha-server-and-external-provisioner repo (https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner) is still on Ganesha V4.0.8, whereas upstream just released V6. So there is some reason to think that this might be fixed in newer versions.

ader1990 commented 1 day ago

https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/pull/152 -> there is a PR to update the chart to use Ganesha v6.

ader1990 commented 1 day ago

https://github.com/nfs-ganesha/nfs-ganesha/commit/24da5c30429bb1ee0bfde3644ab5c3b84daa778e#diff-d4e3191eebe00b04019cafa02691fef13becc8cb3cc098ae6c177653cea40561R776 -> this commit is the best candidate for a fix for this issue.