CircleCI-Public / aws-s3-orb

Integrate Amazon AWS S3 with your CircleCI CI/CD pipeline easily with the aws-s3 orb.
https://circleci.com/orbs/registry/orb/circleci/aws-s3
MIT License

"Unknown options" error when running copy #45

Closed · michaldudak closed 1 year ago

michaldudak commented 1 year ago

Orb Version 3.1.1

Describe the bug After updating from 3.0.0 to 3.1.1, the copy command fails with the error Unknown options: --content-type application/json. According to the AWS CLI docs, this argument is supported (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html).

To Reproduce

Steps to reproduce the behavior:

  1. Go to https://app.circleci.com/pipelines/github/mui/material-ui/86615/workflows/7be373c8-4164-4c81-9645-bbf0c4743e0e/jobs/460113
  2. Click on the failed step
  3. See error

Expected behavior The command succeeds, as in the previous version.

Additional context The config of the failing step is at https://github.com/mui/material-ui/blob/0d3ea74d1ef76f06deff16984d13608d388cfed9/.circleci/config.yml#L721-L737

toote commented 1 year ago

The issue appears to be in the handling of multi-word arguments:

https://github.com/CircleCI-Public/aws-s3-orb/blob/master/src/scripts/copy.sh#L7

As you can see, that line passes all of the arguments through as a single quoted string. So you are not passing a --content-type option with application/json as its value; you are passing the whole of --content-type application/json as one single argument, which the AWS CLI does not recognize.

The same line also causes the issue on sync commands as defined in the examples:

https://github.com/CircleCI-Public/aws-s3-orb/blob/master/src/scripts/sync.sh#L7

Unknown options: --acl public-read --cache-control max-age=86400
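A minimal repro of the quoting behavior (illustrative only; PARAM_ARGS here is a stand-in, not the orb's actual variable handling):

```shell
#!/bin/sh
# Stand-in for the orb's arguments parameter
PARAM_ARGS="--content-type application/json"

set -- "${PARAM_ARGS}"            # quoted: the whole string stays ONE word
echo "quoted: $# argument(s)"     # prints: quoted: 1 argument(s)

# shellcheck disable=SC2086       # word splitting is intentional here
set -- ${PARAM_ARGS}              # unquoted: the shell splits it into two words
echo "unquoted: $# argument(s)"   # prints: unquoted: 2 argument(s)
```

With the quoted form, the AWS CLI receives a single option token containing a space, which is why it reports it as an unknown option.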
speque commented 1 year ago

I encountered this one, too.

phoenix2x commented 1 year ago

Same: "Unknown options: --acl public-read --cache-control max-age=86400"

rajeshbolisetty commented 1 year ago

Same error: Unknown options: --acl public-read --recursive --cache-control max-age=8640

silentsakky commented 1 year ago

I am facing the same issue:

#!/bin/bash -eo pipefail
#!/bin/sh
PARAM_AWS_S3_FROM=$(eval echo "${PARAM_AWS_S3_FROM}")
PARAM_AWS_S3_TO=$(eval echo "${PARAM_AWS_S3_TO}")
PARAM_AWS_S3_ARGUMENTS=$(eval echo "${PARAM_AWS_S3_ARGUMENTS}")

if [ -n "${PARAM_AWS_S3_ARGUMENTS}" ]; then
    set -- "$@" "${PARAM_AWS_S3_ARGUMENTS}"
fi

if [ -n "${PARAM_AWS_S3_PROFILE_NAME}" ]; then
    set -- "$@" --profile "${PARAM_AWS_S3_PROFILE_NAME}"
fi

aws s3 sync "${PARAM_AWS_S3_FROM}" "${PARAM_AWS_S3_TO}" "$@"

Unknown options: --acl public-read

Exited with code exit status 252
CircleCI received exit code 252
andrewfhw commented 1 year ago

I am receiving the same error when using the following configuration:

- s3/sync:
    arguments: |
      --acl public-read \
      --delete
    from: apps/myapp
    to: 's3://my-bucket'

Then I get the error:

#!/bin/bash -eo pipefail
#!/bin/sh
PARAM_AWS_S3_FROM=$(eval echo "${PARAM_AWS_S3_FROM}")
PARAM_AWS_S3_TO=$(eval echo "${PARAM_AWS_S3_TO}")
PARAM_AWS_S3_ARGUMENTS=$(eval echo "${PARAM_AWS_S3_ARGUMENTS}")

if [ -n "${PARAM_AWS_S3_ARGUMENTS}" ]; then
    set -- "$@" "${PARAM_AWS_S3_ARGUMENTS}"
fi

if [ -n "${PARAM_AWS_S3_PROFILE_NAME}" ]; then
    set -- "$@" --profile "${PARAM_AWS_S3_PROFILE_NAME}"
fi

aws s3 sync "${PARAM_AWS_S3_FROM}" "${PARAM_AWS_S3_TO}" "$@"

Unknown options: --acl public-read --delete

Exited with code exit status 252

If I pass only the --delete argument, I get no error. If I pass --acl=public-read I also get no error. The error only seems to occur when I pass more than one argument or an argument with a space.

I downgraded to version 3.0.0 and I do not get the error.
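That fits the single-string explanation: with the = form, the flag and its value stay one word, so it remains a valid option token even when the whole string is passed as a single argument. A small illustration (option strings here are just examples):

```shell
#!/bin/sh
set -- "--acl=public-read"
echo "$# word(s): $1"   # 1 word, and it is a complete option the CLI can parse

set -- "--acl public-read"
echo "$# word(s): $1"   # still 1 word, but now it contains a space,
                        # so the CLI sees one unknown option
```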

olivierto commented 1 year ago

Same here; I had to downgrade to 3.0.0.

brivu commented 1 year ago

Hey Everyone - I just wanted to follow up here. I fixed this issue with PR https://github.com/CircleCI-Public/aws-s3-orb/pull/54.

Just a note though: all argument values must be passed on a single line. Multi-line values like the one above will cause jobs to fail.

I'll be cutting a new release soon with this fix available. Thanks for your patience on this!
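For anyone curious, here is one way the string can be split into separate words before reaching the AWS CLI. This is a sketch of the general technique, not necessarily the exact change in the PR; the variable name mirrors the orb's script:

```shell
#!/bin/sh
# Stand-in value; in the orb this comes from the job's arguments parameter
PARAM_AWS_S3_ARGUMENTS="--acl public-read --delete"

if [ -n "${PARAM_AWS_S3_ARGUMENTS}" ]; then
    # eval re-parses the string, so each flag becomes its own positional arg
    eval "set -- \"\$@\" ${PARAM_AWS_S3_ARGUMENTS}"
fi

echo "$# args"        # prints: 3 args
printf '%s\n' "$@"    # one line per word: --acl / public-read / --delete
```

The trade-off of eval is that the string is re-parsed by the shell, so values containing shell metacharacters would need quoting inside the parameter itself.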