slenky opened this issue 3 years ago
One workaround could be a liveness check that runs `ls` on the mounted folders every 10 seconds.
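A minimal sketch of that check as a standalone Go binary (the binary name and probe wiring are assumptions, not something this repo ships): wire it up as an exec livenessProbe with the mountpoints as arguments and `periodSeconds: 10`. A dead FUSE mount typically fails `readdir` with ENOTCONN, which is exactly what this catches:

```go
// mountcheck: exit non-zero if any given mountpoint is unreadable.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, dir := range os.Args[1:] {
		// os.ReadDir is the programmatic equivalent of the suggested "ls";
		// on a dead FUSE mount it fails with "transport endpoint is not
		// connected" (ENOTCONN), the probe fails, and kubelet restarts the pod.
		if _, err := os.ReadDir(dir); err != nil {
			fmt.Fprintf(os.Stderr, "mount check failed for %s: %v\n", dir, err)
			os.Exit(1)
		}
	}
}
```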
That's a really cool workaround, btw. I was toying with the idea of remounting the previous mounts after a container crash, but that means credentials and mountpoints have to be stored on some persistent drive, and I'm still not sure the mount itself would survive. It would also mean some development time is required.
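For the record, a rough sketch of what persisting and replaying mounts could look like. The state-file path, the struct fields, and the `rclone mount` invocation are all assumptions for illustration, and a crashed FUSE mountpoint would likely need a `fusermount -u` cleanup before remounting:

```go
// Sketch only: persist mount metadata so a restarted driver can replay it.
package main

import (
	"encoding/json"
	"os"
	"os/exec"
)

// MountRecord captures what would need to survive a driver-pod restart.
// Credentials are omitted here; they would need equally careful storage.
type MountRecord struct {
	Remote     string `json:"remote"`     // e.g. "s3:my-bucket/prefix"
	MountPoint string `json:"mountPoint"` // kubelet's target path
}

// stateFile must live on a hostPath or other persistent volume,
// otherwise it dies with the container.
const stateFile = "/var/lib/csi-rclone/mounts.json"

func saveMounts(records []MountRecord) error {
	data, err := json.Marshal(records)
	if err != nil {
		return err
	}
	return os.WriteFile(stateFile, data, 0o600)
}

// replayMounts re-establishes mounts after a driver crash/restart.
func replayMounts() error {
	data, err := os.ReadFile(stateFile)
	if os.IsNotExist(err) {
		return nil // nothing to restore
	} else if err != nil {
		return err
	}
	var records []MountRecord
	if err := json.Unmarshal(data, &records); err != nil {
		return err
	}
	for _, r := range records {
		// Best effort: a stale FUSE mountpoint usually needs unmounting first.
		_ = exec.Command("fusermount", "-u", r.MountPoint).Run()
		if err := exec.Command("rclone", "mount", r.Remote, r.MountPoint, "--daemon").Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := replayMounts(); err != nil {
		os.Exit(1)
	}
}
```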
Another idea I had was using golang native bindings from librclone (if I understood correctly what it does) instead of a binary FUSE mount, and that might work even better. Still, I'm not sure how to use it (even the smallest golang example would help), and it would require dev time.
That would be awesome @sarendsen! Even a simple example wrapper would do for initial tests, to see how it works with CSI RPC calls.
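About the smallest librclone sketch I can think of: the `librclone/librclone` Go package with `Initialize`, `Finalize`, and `RPC` does exist in rclone, but whether `mount/mount` is registered in-process depends on build tags and imports, and the remote/mountpoint parameters below are purely illustrative:

```go
// Minimal in-process rclone via librclone's native Go package.
// go get github.com/rclone/rclone/librclone/librclone
package main

import (
	"fmt"

	"github.com/rclone/rclone/librclone/librclone"
)

func main() {
	librclone.Initialize()
	defer librclone.Finalize()

	// Any rclone rc command can be called as method name + JSON input.
	out, status := librclone.RPC("core/version", "{}")
	fmt.Println(status, out)

	// Mounting would go through the "mount/mount" rc call; this assumes a
	// remote named "s3" is already configured and FUSE support is compiled in.
	out, status = librclone.RPC("mount/mount",
		`{"fs": "s3:my-bucket", "mountPoint": "/mnt/data"}`)
	fmt.Println(status, out)
}
```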
When I have to upgrade this plugin, I just kill the pods mounting the storage class and it remounts them. That does not happen too often, and I have not seen it crash on its own.
Hello,
Thank you for this CSI driver. I have tested it with both MinIO and AWS S3 and it works like a charm. However, there is an issue, which also exists in csi-s3: if the controller or daemonset pod restarts, you can no longer access any mounted directory until you restart that pod manually.
Related: https://github.com/ctrox/csi-s3/issues/34