Resource to upload files to S3. Unlike the official S3 Resource, this Resource can upload or download multiple files.
Include the following in your Pipeline YAML file, replacing the values in the angle brackets (`< >`):
```yaml
resource_types:
- name: <resource type name>
  type: docker-image
  source:
    repository: 18fgsa/s3-resource-simple

resources:
- name: <resource name>
  type: <resource type name>
  source:
    access_key_id: {{aws-access-key}}
    secret_access_key: {{aws-secret-key}}
    bucket: {{aws-bucket}}
    path: [<optional>, use to sync to a specific path of the bucket instead of root of bucket]
    change_dir_to: [<optional, see note below>]
    options: [<optional, see note below>]
    region: <optional, see below>

jobs:
- name: <job name>
  plan:
  - <some Resource or Task that outputs files>
  - put: <resource name>
```
The `access_key_id` and `secret_access_key` are optional; if they are not provided, the EC2 Metadata service will be queried for role-based credentials.
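The fallback can be sketched as follows (a simplified illustration, not the resource's actual code):

```python
def resolve_credentials(source):
    """Pick static keys from the resource's source config when both are
    present; otherwise fall back to role-based credentials, which the
    resource obtains from the EC2 instance metadata service."""
    key_id = source.get("access_key_id")
    secret = source.get("secret_access_key")
    if key_id and secret:
        return ("static", key_id, secret)
    return ("ec2-metadata", None, None)

# With explicit keys, the static credentials are used:
print(resolve_credentials({"access_key_id": "AKIA...",
                           "secret_access_key": "s3cr3t"})[0])  # static
# Without keys, the resource queries the metadata service:
print(resolve_credentials({"bucket": "my-bucket"})[0])  # ec2-metadata
```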
The `change_dir_to` flag lets you upload the contents of a sub-directory without including the directory name as a prefix in your bucket.
Given the following directory `test`:

```
test
├── 1.json
└── 2.json
```
and the config:

```yaml
- name: test
  type: s3-resource-simple
  source:
    change_dir_to: test
    bucket: my-bucket
    [...other settings...]
```
`put` will upload `1.json` and `2.json` to the root of the bucket. By contrast, with `change_dir_to` set to `false` (the default), `1.json` and `2.json` will be uploaded as `test/1.json` and `test/2.json`, respectively. This flag has no effect on `get` or `check`.
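The effect on destination keys can be sketched with a small helper (illustrative only, not the resource's implementation):

```python
def s3_key(local_path, change_dir_to=None):
    """Compute the destination key for a local file, mimicking the
    change_dir_to behaviour: when set, that directory name is stripped
    from the front of the path; otherwise the path is kept as-is."""
    if change_dir_to:
        prefix = change_dir_to.rstrip("/") + "/"
        if local_path.startswith(prefix):
            return local_path[len(prefix):]
    return local_path

print(s3_key("test/1.json", change_dir_to="test"))  # 1.json
print(s3_key("test/1.json"))                        # test/1.json
```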
The `options` parameter is synonymous with the options that the `aws` CLI accepts for `sync`. Please see S3 Sync Options and pay special attention to the Use of Exclude and Include Filters.
Given the following directory `test`:

```
test
├── results
│   ├── 1.json
│   └── 2.json
└── scripts
    └── bad.sh
```
we can upload only the `results` subdirectory by using the following `options` in our task configuration:

```yaml
options:
- "--exclude '*'"
- "--include 'results/*'"
```
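`aws s3 sync` applies these filters in the order given, with files included by default and the last matching filter deciding the outcome; that rule can be sketched as (an illustration of the filter semantics, not the CLI's source):

```python
import fnmatch

def is_included(path, filters):
    """Apply aws-cli-style exclude/include filters in order.
    Files are included by default; the LAST matching filter wins."""
    decision = True  # default: include everything
    for kind, pattern in filters:
        if fnmatch.fnmatch(path, pattern):
            decision = (kind == "include")
    return decision

filters = [("exclude", "*"), ("include", "results/*")]
print(is_included("results/1.json", filters))  # True
print(is_included("scripts/bad.sh", filters))  # False
```

This is why the broad `--exclude '*'` must come first: it drops everything, and the later `--include 'results/*'` re-admits only the files we want.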
Interacting with some AWS regions (like London) requires AWS Signature Version 4. In these cases, specify the region in your resource declaration:

```yaml
region: eu-west-2
```