ruiarodrigues opened this issue 5 months ago
Thank you for the report. What is the underlying operating system you are running minikube on? There are some known issues with unmounting on unsupported operating systems (the supported versions list is here). We currently don't test on minikube, so it is not officially supported, but I would like to understand more about the root cause here.
My setup is:
Do you expect any problem if Kubernetes is running on RHEL 8?
The deployment is exactly the one from the static provisioning example; the PV section is this one. I'm using a local MinIO instance. Everything works fine and I'm able to see all the files in the bucket; only the unmount is not working.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - allow-delete
    - region Lisbon
    - force-path-style
    - endpoint-url http://192.168.5.2:9000
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: s3.select
---
```
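For reference, a PVC binding to this PV in static provisioning might look like the following sketch. The claim name `s3-pvc` is illustrative and not taken from the original example:

```yaml
# Illustrative PVC binding statically to the s3-pv PersistentVolume above.
# Only s3-pv comes from the example; other names are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "" # empty string disables dynamic provisioning
  resources:
    requests:
      storage: 1Gi # ignored, required
  volumeName: s3-pv
```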
I see the same issue on RHEL8 using a k8s cluster installed via kubeadm. A work-around is discussed here: https://unix.stackexchange.com/questions/512067/how-to-get-mount-information-of-host-inside-a-docker-container
Instead of mounting /proc/mounts, try mounting /proc/1/mounts. On RHEL8, this will show the mounts on the host and let the CSI driver clean up properly. Note that if you have SELinux in enforcing mode, then you'll need to add some allow rules.
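Concretely, the workaround amounts to pointing the driver's proc-mount hostPath at PID 1's mount table. A sketch of what that looks like in the node DaemonSet spec; the volume and container names here are assumptions based on this thread, not the exact manifest shipped with the driver:

```yaml
# Hypothetical excerpt from the s3-csi-node DaemonSet. The key change:
# mount /proc/1/mounts (PID 1 on the host, i.e. the host's mount table)
# instead of /proc/mounts (which resolves to the container's own view).
spec:
  containers:
    - name: s3-plugin
      volumeMounts:
        - name: proc-mounts
          mountPath: /proc/mounts
          readOnly: true
  volumes:
    - name: proc-mounts
      hostPath:
        path: /proc/1/mounts # was: /proc/mounts
        type: File
```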
We are facing a similar issue: the volume is not detected as mounted, so the unmount is skipped by s3-csi-node. When deleting the pod, it hangs in the Terminating state.
Environment:
We are facing the same issue.
Kubernetes Version: v1.29.0+k3s1
OS: Ubuntu 22.04.4 LTS
Driver: aws-mountpoint-s3-csi-driver:v1.3.1
@numarco and @psavva had the same issue on Ubuntu 22.04 + k3s. We were able to solve it by using /proc/1/mounts as the proc-mount hostPath. (/proc/mounts is a symlink to /proc/self/mounts, which looks up the mounts for the current process (PID) and not the host overall, AFAIK.)
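The symlink claim is easy to check on any Linux box:

```shell
# /proc/mounts is a symlink resolved per-process: inside a container it
# shows the container's mount namespace, not the host's.
readlink /proc/mounts   # prints: self/mounts
# PID 1's table: on the host this is the host view; inside a container
# it is the container's init process.
head -n 1 /proc/1/mounts
```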
See #191 for more information.
Same issue on Ubuntu 20.04 and Ubuntu 22.04 hosts, which are theoretically supported in the "Distros Support Matrix". We observed exactly what @mmoscher described above, and #191 solved the problem for us. I hope that solution is included in a future release.
We've merged #191, which allows alternative values for the /proc/mounts location; this will make it into the next CSI driver release.
/kind bug
What happened? I'm using Minikube locally on my laptop and deployed a pod that mounts a bucket via the driver. When I delete the pod (and the PV and PVC), the kubectl command gets stuck.
In the driver logs I can see that the bucket is mounted correctly, but the subsequent unmount of the same volume fails, stating that it is not mounted. It actually is mounted, because I can access the files inside the bucket. If I remove the volume mount from the YAML file, everything works fine.
Any suggestion to show more information about what is wrong?
What you expected to happen: the bucket is unmounted and the pod terminates cleanly.
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
Environment
Kubernetes version (use `kubectl version`):
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3