yufan022 opened 1 year ago
Try using CREATE STAGE my_s3_stage url='s3://load/files/' connection=(role_arn='xxxxxxxxx');
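A stage created that way can then be referenced by COPY INTO without putting any AK/SK in the statement itself. A minimal sketch, assuming a hypothetical target table named my_table and CSV files under the stage path:

```sql
-- Assumes the stage created above and a hypothetical target table `my_table`
COPY INTO my_table
FROM @my_s3_stage
PATTERN = '.*[.]csv$'
FILE_FORMAT = (TYPE = 'CSV');
```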
Let me try.
Currently, we use it like this:
# databend-query.toml (storage configuration for the FUSE bucket)
[storage]
type = "s3"
allow_insecure = true
[storage.s3]
bucket = "<your-bucket-name>"
endpoint_url = "<your-endpoint>"
# without AK/SK (access key / secret key)
# access_key_id = "<your-key-id>"
# secret_access_key = "<your-account-key>"
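With the keys commented out, the server presumably picks up credentials from the deployment environment (e.g. an attached IAM role), and ordinary FUSE tables stored in that bucket need no credentials in SQL at all. A small sketch with a hypothetical table (assuming a database named log, as in the sharding example below), for contrast with the stage-based flow that follows:

```sql
-- Hypothetical table: its data lands in the bucket configured in databend-query.toml,
-- without any AK/SK appearing in the SQL.
CREATE TABLE log.events (ts TIMESTAMP, msg STRING);
INSERT INTO log.events VALUES (now(), 'hello');
```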
---
# create the stage and copy from it into the table
CREATE STAGE sharding_$INDEX url = 's3://xx/$INDEX/';
COPY INTO log.sharding_$INDEX FROM @sharding_$INDEX PATTERN='.*[.]csv$' FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = '\t' RECORD_DELIMITER = '\n') PURGE=true;
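Here $INDEX is presumably substituted per shard by an outer script; with a concrete shard index such as 0, the statements would read as follows (bucket name xx kept from the example above):

```sql
-- $INDEX expanded to the concrete shard index 0 (assumed outer-script substitution)
CREATE STAGE sharding_0 url = 's3://xx/0/';
COPY INTO log.sharding_0
FROM @sharding_0
PATTERN = '.*[.]csv$'
FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = '\t' RECORD_DELIMITER = '\n')
PURGE = true;
```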
Summary

Refer: https://github.com/datafuselabs/opendal/issues/1088
When users deploy their own Databend, it is better to be able to execute COPY INTO / CREATE STAGE without an AWS key.

now:
target:
As @Xuanwo mentioned in https://github.com/datafuselabs/opendal/issues/1088#issuecomment-1358903695, Databend Cloud and a user-deployed environment have different considerations, so it is better to add an option.
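To make the target concrete: one possible shape is that, when no connection options are given, a self-hosted deployment falls back to its own environment credentials (e.g. an attached IAM role), gated by the option mentioned above. A hypothetical sketch of the requested behaviour, not a description of current Databend semantics:

```sql
-- Hypothetical: no AK/SK or role_arn supplied; a user-deployed Databend would fall
-- back to the environment's own credentials (e.g. an IAM instance role), while
-- Databend Cloud could keep this behaviour disabled via the proposed option.
CREATE STAGE my_stage url = 's3://my-bucket/files/';
COPY INTO my_table FROM @my_stage FILE_FORMAT = (TYPE = 'CSV');
```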