Hi,
I ran into a problem where an s3 bucket was unavailable due to a configuration issue, yet the container still considered the mount to be successful.
This happens because s3fs may only realize there is a problem a few moments after the command has gone into the background.
So if you check the output of "mount" right after s3fs exits, it can still show the mount point even though the mount didn't actually succeed.
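To illustrate the kind of check I mean (this is only a sketch, not the actual commit; `MNT_POINT`, `is_mounted`, and `wait_for_mount` are names I made up for this example): instead of trusting the s3fs exit code, wait briefly and confirm the mount point really appears in /proc/mounts, since s3fs can daemonize "successfully" and only fail afterwards.

```shell
#!/bin/sh
# Hypothetical sketch of a post-mount verification step.

is_mounted() {
    # /proc/mounts reflects the kernel's current view of mounted filesystems;
    # the pattern matches the mount point field surrounded by spaces.
    grep -qs " $1 " /proc/mounts
}

wait_for_mount() {
    # Poll a few times so a delayed s3fs failure is still caught.
    for _ in 1 2 3 4 5; do
        is_mounted "$1" && return 0
        sleep 1
    done
    return 1
}

# Example usage: the root filesystem is always mounted, so this succeeds.
if is_mounted /; then
    echo "mount ok"
else
    echo "mount failed"
fi
```

The polling is the important part: a single check immediately after s3fs exits can pass even when the mount is about to fail.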
I added a workaround for that in my fork. The solution doesn't seem perfectly clean, but it appears to work. I can create a pull request if you would like to merge it: https://github.com/totycro/docker-s3fs-client/commit/012d136bfa1397c82ed40699b120671ed1622e60