dienhartd opened this issue 1 week ago
Thanks for opening the bug report, @dienhartd. We'll investigate further.
Would you be able to review dmesg on the host and see if there are any error messages at the time of the issue, and share them if so? In particular, any error messages related to opening of /host/proc/mounts would be of interest.
Please can you let us know what operating system you're running on the cluster nodes too!
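For anyone gathering the requested information, a minimal sketch of collecting the relevant kernel messages on the affected node follows; the grep pattern and the output filename are illustrative assumptions, not part of the driver's tooling:

```shell
# Capture recent kernel messages mentioning mounts or FUSE around the time
# of the crash. Run this on the host node, not inside a pod; you may need
# root for dmesg. Pattern and filename are illustrative assumptions.
dmesg -T 2>/dev/null | grep -iE 'proc/mounts|fuse|mount' > s3-csi-dmesg.txt || true
# Show how many candidate lines were captured
wc -l s3-csi-dmesg.txt
```

The -T flag prints human-readable timestamps, which makes it easier to correlate entries with the time the pod started crashing.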
/kind bug
What happened?
Periodically, without warning, one of my S3 Mountpoint driver pods will crash with gRPC errors until I delete it. This usually causes a dependent pod to fail to start. The replacement pod created immediately after deletion works fine, but noticing the problem and deleting the pod requires manual intervention, after dependent pods have already crashed due to the missing PV.
What you expected to happen?
The error not to occur.

How to reproduce it (as minimally and precisely as possible)?
Unclear.

Anything else we need to know?:
Logs
Environment
- Kubernetes version (use kubectl version): Client Version: v1.31.1, Server Version: v1.30.5-eks-ce1d5eb
- Driver version: v1.9.0
- Installation of the S3 Mountpoint driver is through eksctl, i.e. eksctl create addon aws-mountpoint-s3-csi-driver
Was directed by @muddyfish to file this issue here: https://github.com/awslabs/mountpoint-s3-csi-driver/issues/174#issuecomment-2443935264
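For anyone hitting the same failure before a fix lands, the manual recovery the report describes (delete the crashed driver pod and let its DaemonSet replace it) can be sketched roughly as follows; the namespace and label selector here are assumptions about a default eksctl addon install, so verify them against your cluster:

```shell
# Hypothetical recovery sketch: locate the crashing CSI node pod, then
# delete it so its DaemonSet schedules a replacement, which (per the
# report) mounts cleanly. Namespace/label are assumptions; adjust them.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-mountpoint-s3-csi-driver
  # then: kubectl -n kube-system delete pod <name-of-crashing-s3-csi-node-pod>
else
  echo "kubectl not found; run these commands against the affected cluster"
fi
```

Dependent pods stuck on the missing PV should recover once the replacement driver pod is running, though they may need to be restarted as well.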