efrecon / docker-s3fs-client

Alpine-based s3fs client: mount from container, make available to other containers
BSD 3-Clause "New" or "Revised" License

Volume on Host / Containers not reflecting Bucket Contents #42

Closed: logicalor closed this issue 1 year ago

logicalor commented 1 year ago

OS: Ubuntu 22.04
Docker Version: 20.10.22

Sample Docker-Compose:

version: "3.6"

services:
  php-fpm:
    container_name: "php-fpm"
    build:
      context: ./services/php-fpm
      dockerfile: Dockerfile
    volumes:
      ...
      - $VOLUME_S3FS_PUBLIC:/var/www/html/sites/default/files
      ...
    depends_on:
      - s3fs-public
  ...
  s3fs-public:
    container_name: "s3fs-public"
    image: efrecon/s3fs:1.91
    environment:
      AWS_S3_BUCKET: $MEDIA_S3_BUCKET_PUBLIC
      AWS_S3_ACCESS_KEY_ID: $MEDIA_S3_KEY
      AWS_S3_SECRET_ACCESS_KEY: $MEDIA_S3_SECRET
      AWS_S3_MOUNT: '/opt/s3fs/bucket'
      S3FS_DEBUG: 1
      S3FS_ARGS: ''
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
    security_opt:
      - "apparmor:unconfined"
    volumes:
      - '${VOLUME_S3FS_PUBLIC}:/opt/s3fs/bucket:rshared'

The issue I'm having is that when I run docker compose up against the above config (some other containers and env vars omitted), the s3fs volume doesn't appear to be shared with the host or with the other containers.

This is the output from docker compose logs for the s3fs-public container:

s3fs-public   | Mounting bucket dev-website-public onto /opt/s3fs/bucket, owner: 0:0
s3fs-public   | FUSE library version: 2.9.9
s3fs-public   | nullpath_ok: 0
s3fs-public   | nopath: 0
s3fs-public   | utime_omit_ok: 1
s3fs-public   | unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
s3fs-public   | INIT: 7.34
s3fs-public   | flags=0x33fffffb
s3fs-public   | max_readahead=0x00020000
s3fs-public   |    INIT: 7.19
s3fs-public   |    flags=0x00000039
s3fs-public   |    max_readahead=0x00020000
s3fs-public   |    max_write=0x00020000
s3fs-public   |    max_background=0
s3fs-public   |    congestion_threshold=0
s3fs-public   |    unique: 2, success, outsize: 40

If I docker exec s3fs-public sh and navigate to ./bucket I can see the contents of the remote s3 bucket. But if I am on the host and navigate to $VOLUME_S3FS_PUBLIC (which the container creates - in this case /media/s3fs-public) then I can't see the contents of the remote s3 bucket. Similarly, if I docker exec php-fpm bash and navigate to /var/www/html/sites/default/files I can't see the contents of the remote s3 bucket either.
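
A quick way to verify this from the host (a diagnostic sketch; /media/s3fs-public is the $VOLUME_S3FS_PUBLIC path mentioned above) is to inspect the mount and its propagation flags:

# On the host: if propagation worked, the FUSE mount should be listed here.
# No output (or no s3fs entry) means the mount never propagated out of the
# container, even though the bind mount itself exists.
findmnt -o TARGET,SOURCE,FSTYPE,PROPAGATION /media/s3fs-public

# Inside the s3fs container, the FUSE mount itself should always show up:
docker exec s3fs-public mount | grep s3fs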

I have also tried cloning this repo, setting my S3 credentials in a .env, and running docker compose up against the untouched docker-compose.yml file, but am getting the same result - i.e. can't see the remote s3 files in ./bucket.

Is there additional configuration I need to make in order for the mounted s3fs to be shared with the host and other containers?

Thanks.

efrecon commented 1 year ago

Are you running this against something other than AWS? In that case, you would need to specify the URL at which to contact the S3 API, e.g. https://s3.yourprovider.com or similar. Tell me if that helps.
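
For example, a minimal sketch against the compose file from the original report, assuming a non-AWS provider (the endpoint URL is a placeholder; AWS_S3_URL is the image's variable for a custom S3 endpoint):

  s3fs-public:
    image: efrecon/s3fs:1.91
    environment:
      AWS_S3_BUCKET: $MEDIA_S3_BUCKET_PUBLIC
      AWS_S3_ACCESS_KEY_ID: $MEDIA_S3_KEY
      AWS_S3_SECRET_ACCESS_KEY: $MEDIA_S3_SECRET
      # Point s3fs at the provider's S3-compatible API (placeholder URL):
      AWS_S3_URL: 'https://s3.yourprovider.com'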

nrukavkov commented 1 year ago

@efrecon I had the same problem. Inside the container the files exist, but in the volume there is nothing. I tried both a named Docker volume and a bind mount to the host machine; same behaviour.

.env:

AWS_S3_BUCKET=MYBUCKET
AWS_S3_ACCESS_KEY_ID=MYID
AWS_S3_SECRET_ACCESS_KEY=MYKEY
AWS_DEFAULT_REGION=MYREGION
AWS_S3_URL=https://s3.provider
S3FS_ARGS=use_path_request_style

docker-compose.yml:

services:
  s3fs:
    image: efrecon/s3fs:1.91
    restart: unless-stopped
    env_file: .env
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    volumes:
      - s3data:/opt/s3fs/bucket:rshared
  test:
    image: bash:latest
    restart: unless-stopped
    depends_on:
      - s3fs
    # Just so this container won't die and you can test the bucket from within
    command: sleep infinity
    volumes:
      - s3data:/data:rshared

volumes:
  s3data:

nrukavkov commented 1 year ago

I did an experiment. I opened a shell in the s3fs container and ran umount /opt/s3fs/bucket. Then I tried to create a folder, and that folder showed up in the other container.

Then I deleted the 'test' folder and ran tini -g -- docker-entrypoint.sh again, and the second container showed nothing once more.

I also built a new image from ubuntu:latest, and it has the same problem.
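
A sketch of that experiment as shell commands (service names taken from the compose file above; this just illustrates the propagation behaviour being described):

# Open a shell in the s3fs service and unmount the FUSE filesystem:
docker compose exec s3fs sh
# ...then, inside that shell:
umount /opt/s3fs/bucket
mkdir /opt/s3fs/bucket/test   # created on the now-plain directory
exit

# The folder is visible from the test service:
docker compose exec test ls /data

# But after deleting it and re-running the entrypoint to remount the
# bucket, the test service sees nothing again:
docker compose exec s3fs sh -c 'rmdir /opt/s3fs/bucket/test && tini -g -- docker-entrypoint.sh'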

truesteps commented 1 year ago

@logicalor heya! Did you manage to figure out a fix for this? I have the same issue: when I exec into the container and modify the contents of the bucket folder it works, but not when I mount it to the host and then mount things from the host into the other containers.

truesteps commented 1 year ago

From what I'm seeing, it seems the volume from the container is not getting mapped to the host, because all the other services mount properly to the host except s3fs.

truesteps commented 1 year ago

@logicalor I figured out the issue with the help of my friend and some trial and error... There seems to be a bug in the docker compose plugin (https://github.com/docker/compose/issues/9380), so when you run it with docker compose up it just won't work: the propagation doesn't get applied no matter what you put in docker-compose.yml. You can check it with docker inspect {container_name} and look at Propagation under the Mounts section.
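
For anyone checking their own setup, something like this (container name from the earlier compose file) prints each mount and its propagation mode:

# Print destination and propagation for every mount of the container.
# With the buggy compose plugin this shows an empty or "rprivate" propagation
# instead of the "rshared" requested in docker-compose.yml.
docker inspect -f '{{range .Mounts}}{{.Destination}} -> {{.Propagation}}{{"\n"}}{{end}}' s3fs-public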

Fixed by uninstalling the docker-compose-plugin package and installing the standalone docker-compose binary.
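
In case it helps others, a sketch of one common way to do that on Ubuntu (the version number is a placeholder; pick the current one from the Compose releases page):

# Remove the plugin and install a standalone docker-compose binary.
sudo apt-get remove docker-compose-plugin
sudo curl -SL "https://github.com/docker/compose/releases/download/v2.17.2/docker-compose-linux-x86_64" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version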

TheNexter commented 1 year ago

Enjoy everyone: https://github.com/TheNexter/Rclone-Mount-Docker-Compose @truesteps @nrukavkov @logicalor

truesteps commented 1 year ago

@TheNexter thanks :) I already figured it out though; the issue was me using the compose plugin for Docker instead of standalone docker-compose... Unfortunately, setting propagation just plain didn't work with the compose plugin.

tab10 commented 1 year ago

Just some tips from my experience:

Possible workarounds:

Thanks to the package contributors for your efforts!

efrecon commented 1 year ago

Thanks for figuring this out. I have added a mention of this issue in the main README.