efrecon / docker-s3fs-client

Alpine-based s3fs client: mount from container, make available to other containers
BSD 3-Clause "New" or "Revised" License

GROUP_NAME vs GID #15

Closed · jonashartwig closed this issue 3 years ago

jonashartwig commented 3 years ago

Hi,

I need to mount the s3fs directory as 50000:0, so I pass the environment variables UID=50000 and GID=0. I see that the docker-entrypoint.sh in this repository creates a GROUP_NAME variable (maybe that was already a fix; a sketch of that logic follows the script below). That code is not present in the latest image:

```console
$ docker pull efrecon/s3fs
Using default tag: latest
latest: Pulling from efrecon/s3fs
Digest: sha256:56c87521168d51da8bd81a6ecbfa5097c30c839fa449858724cbca7a15fea926
Status: Image is up to date for efrecon/s3fs:latest
docker.io/efrecon/s3fs:latest
$ docker run -it --entrypoint="" efrecon/s3fs sh
/opt/s3fs # cat /usr/local/bin/docker-entrypoint.sh
```

```sh
#! /usr/bin/env sh

# Where are we going to mount the remote bucket resource in our container.
DEST=${AWS_S3_MOUNT:-/opt/s3fs/bucket}

# Check variables and defaults
if [ -z "${AWS_S3_ACCESS_KEY_ID}" -a -z "${AWS_S3_SECRET_ACCESS_KEY}" -a -z "${AWS_S3_SECRET_ACCESS_KEY_FILE}" -a -z "${AWS_S3_AUTHFILE}" ]; then
    echo "You need to provide some credentials!!"
    exit 1
fi
if [ -z "${AWS_S3_BUCKET}" ]; then
    echo "No bucket name provided!"
    exit 1
fi
if [ -z "${AWS_S3_URL}" ]; then
    AWS_S3_URL="https://s3.amazonaws.com"
fi

if [ -n "${AWS_S3_SECRET_ACCESS_KEY_FILE}" ]; then
    AWS_S3_SECRET_ACCESS_KEY=$(cat "${AWS_S3_SECRET_ACCESS_KEY_FILE}")
fi

# Create or use authorisation file
if [ -z "${AWS_S3_AUTHFILE}" ]; then
    AWS_S3_AUTHFILE=/opt/s3fs/passwd-s3fs
    echo "${AWS_S3_ACCESS_KEY_ID}:${AWS_S3_SECRET_ACCESS_KEY}" > ${AWS_S3_AUTHFILE}
    chmod 600 ${AWS_S3_AUTHFILE}
fi

# forget about the password once done (this will have proper effects when the
# PASSWORD_FILE-version of the setting is used)
if [ -n "${AWS_S3_SECRET_ACCESS_KEY}" ]; then
    unset AWS_S3_SECRET_ACCESS_KEY
fi

# Create destination directory if it does not exist.
if [ ! -d "$DEST" ]; then
    mkdir -p "$DEST"
fi

# Add a group
if [ $GID -gt 0 ]; then
    addgroup -g $GID -S $GID
fi

# Add a user
if [ $UID -gt 0 ]; then
    adduser -u $UID -D -G $GID $UID
    RUN_AS=$UID
    chown $UID $AWS_S3_MOUNT
    chown $UID ${AWS_S3_AUTHFILE}
    chown $UID /opt/s3fs
fi

# Debug options
DEBUG_OPTS=
if [ "${S3FS_DEBUG}" = "1" ]; then
    DEBUG_OPTS="-d -d"
fi

# Mount and verify that something is present. davfs2 always creates a lost+found
# sub-directory, so we can use the presence of some file/dir as a marker to
# detect that mounting was a success. Execute the command on success.

su - $RUN_AS -c "s3fs $DEBUG_OPTS ${S3FS_ARGS} \
    -o passwd_file=${AWS_S3_AUTHFILE} \
    -o url=${AWS_S3_URL} \
    -o uid=$UID \
    -o gid=$GID \
    ${AWS_S3_BUCKET} ${AWS_S3_MOUNT}"

# s3fs can claim to have a mount even though it didn't succeed.
# Doing an operation actually forces it to detect that and remove the mount.
ls "${AWS_S3_MOUNT}"

mounted=$(mount | grep fuse.s3fs | grep "${AWS_S3_MOUNT}")
if [ -n "${mounted}" ]; then
    echo "Mounted bucket ${AWS_S3_BUCKET} onto ${AWS_S3_MOUNT}"
    exec "$@"
else
    echo "Mount failure"
fi
```
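
For context: with GID=0, the `[ $GID -gt 0 ]` test above skips group creation, and `adduser -G $GID` then fails because BusyBox's `-G` expects an existing group name and no group named "0" exists (GID 0 belongs to `root`). The GROUP_NAME logic referenced above resolves the numeric GID to a usable group name first. Here is a minimal sketch of that idea, assuming the BusyBox `addgroup`/`adduser`/`getent` applets of the Alpine base image; the variable name follows the reporter's description, not the repository's verbatim code:

```sh
# Sketch: map the numeric GID onto a group name that adduser can consume.
if [ "${GID:-0}" -gt 0 ]; then
    # Reuse the name of an existing group with this GID, if any.
    GROUP_NAME=$(getent group "$GID" | cut -d: -f1)
    if [ -z "$GROUP_NAME" ]; then
        # Otherwise create a system group named after the GID.
        GROUP_NAME=$GID
        addgroup -g "$GID" -S "$GROUP_NAME"
    fi
else
    # GID=0 maps to the pre-existing root group instead of a new one.
    GROUP_NAME=root
fi

if [ "${UID:-0}" -gt 0 ]; then
    adduser -u "$UID" -D -G "$GROUP_NAME" "$UID"
fi
```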

Could you please push a new image with that "fix"?

Regards

jonashartwig commented 3 years ago

So the latest tag is not updated; tag 1.90 has the proper content. Closing this.
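
For readers hitting the same problem: pinning the tag avoids the stale latest image. A hypothetical invocation with placeholder bucket name and credentials, using the FUSE device and capability flags that s3fs containers generally need (check the project README for the authoritative list):

```sh
# Pin the tag so the entrypoint with the GROUP_NAME handling is used.
# All values below are illustrative placeholders.
docker run -d --name s3fs \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt apparmor=unconfined \
    -e AWS_S3_BUCKET=my-bucket \
    -e AWS_S3_ACCESS_KEY_ID=my-key-id \
    -e AWS_S3_SECRET_ACCESS_KEY=my-secret \
    -e UID=50000 \
    -e GID=0 \
    efrecon/s3fs:1.90
```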