Closed d0zingcat closed 1 year ago
I'm getting an error when I try mounting a volume with s3fs. Do you have the same error?
I0421 12:59:46.375923 1 mounter.go:66] Mounting fuse with command: s3fs and args: [pvc-c86e8e4d-e422-4c57-8130-e9ced69b0420:/csi-fs /var/lib/kubelet/pods/b9907a71-51fa-4e67-bbca-51f064f711a4/volumes/kubernetes.io~csi/pvc-c86e8e4d-e422-4c57-8130-e9ced69b0420/mount -o use_path_request_style -o url=https://xxx.s3.eu-central-1.amazonaws.com/ -o endpoint= -o allow_other -o mp_umask=000]
E0421 12:59:46.434786 1 utils.go:101] GRPC error: stat /var/lib/kubelet/pods/b9907a71-51fa-4e67-bbca-51f064f711a4/volumes/kubernetes.io~csi/pvc-c86e8e4d-e422-4c57-8130-e9ced69b0420/mount: software caused connection abort
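In case it helps with debugging, s3fs can be run manually in the foreground with extra logging to reproduce the mount outside the CSI driver. A rough sketch (the bucket name, mount point, and endpoint URL below are placeholders taken from the log above; substitute your own):

```shell
# Run s3fs in the foreground (-f) with verbose FUSE and curl logging.
# Replace the bucket, mount point, and URL with your own values.
s3fs pvc-c86e8e4d-e422-4c57-8130-e9ced69b0420:/csi-fs /mnt/test \
  -o use_path_request_style \
  -o url=https://xxx.s3.eu-central-1.amazonaws.com/ \
  -o allow_other -o mp_umask=000 \
  -o dbglevel=info -f -o curldbg
```

If the foreground mount fails too, the curl output usually shows whether it is a credentials, endpoint, or region problem rather than a CSI issue.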
Sorry man, I use rclone to mount and everything works fine; maybe you should give rclone a try.
The problem with rclone is already mentioned in another issue: I can't write to the S3 bucket. Is there any solution for that?
Which issue? Could you give a link to it? I've been using DigitalOcean Spaces with csi-s3 (using rclone) to mount the S3-compatible storage, and everything works fine (e.g. an Nginx web root, the vaultwarden data dir), except for Postgres data: the csi-mounted dir is owned by root while the Postgres data dir's default owner is postgres, so when starting the pod it always told me that '{this dir}' has wrong ownership and the pod entered CrashLoopBackOff. I tried every method I could find (an initContainer to chmod to 0700, setting securityContext, and so on), but nothing worked; only by switching to DigitalOcean Block Storage did the problem get solved.
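For reference, the securityContext attempt mentioned above usually looks something like the sketch below (the pod and PVC names are placeholders; 999 is the postgres uid/gid in the official image). On FUSE-backed mounts like csi-s3 the kubelet often cannot apply this ownership change, which would be consistent with it not working here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres                   # placeholder name
spec:
  securityContext:
    fsGroup: 999                   # postgres group id in the official image
    runAsUser: 999                 # postgres user id
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-data   # placeholder PVC name
```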
This is the same issue I have when I try to use rclone: https://github.com/ctrox/csi-s3/issues/65. And if I try the solution proposed in that issue (using s3fs), I get the error I just mentioned in this thread. Thanks in advance!
When I test this I get
Warning FailedMount 8s (x5 over 16s) kubelet MountVolume.MountDevice failed for volume "pvc-5e0210e2-e0ae-4797-a1c3-93283197d2c7" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name ru.yandex.s3.csi not found in the list of registered CSI drivers
Hi. It looks like you are using Yandex Cloud (Block Storage as your PV)? If so, maybe you should use their official CSI driver instead. I've found this tutorial for you: Integration with Yandex Object Storage; hope it helps.
Same if I use ctrox:
Warning FailedMount 5s (x9 over 2m13s) kubelet MountVolume.MountDevice failed for volume "pvc-de09f6fb-2280-4336-b482-e76fac582301" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name ch.ctrox.csi.s3-driver not found in the list of registered CSI drivers
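The "not found in the list of registered CSI drivers" error usually means the driver's node plugin never registered with the kubelet. A couple of standard checks (run against your own cluster and node; the path assumes the default kubelet directory):

```shell
# List the CSI drivers the API server knows about
kubectl get csidriver

# Check that the driver's node pods are actually running
kubectl -n kube-system get pods -o wide | grep csi-s3

# On the affected node: the driver's registration socket should appear here
ls /var/lib/kubelet/plugins_registry/
```

If the node pods are running but no socket shows up, the kubelet root dir configured in the driver's DaemonSet likely doesn't match the node's actual kubelet directory.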
> I've been using DigitalOcean Space and use csi-s3 (using rclone) to mount the S3 compatible storage and everything works fine
Thanks @d0zingcat for giving me confidence, so I made another try, and this patch actually fixed the issue: https://github.com/ctrox/csi-s3/issues/16#issuecomment-1413180901. It actually fixes many reported `AttachVolume.Attach failed` issues. This needs to be merged again.