Open bryzgaloff opened 2 years ago
Looks related to #29492
I'd subscribe to the bug, because my problem is still not solved.
@bryzgaloff Did you ever solve this?
Hi @dlahn, I have decided to use explicit AWS credentials in the s3(…) call.
All it does is set use_environment_credentials=true for my bucket.
By the way, this is no longer required, as we use environment credentials by default.
I'm experiencing the same issue with S3 connectivity using the AWS credentials provider (when using AWS_CONTAINER_CREDENTIALS_RELATIVE_URI or AWS_CONTAINER_CREDENTIALS_ABSOLUTE_URI).
These are the s3 and storage configs:
s3.xml: |-
  <clickhouse>
    <s3>
      <use_environment_credentials>true</use_environment_credentials>
    </s3>
  </clickhouse>
storage.xml: |-
  <clickhouse>
    <storage_configuration>
      <disks>
        <default>
          <keep_free_space_bytes>1024</keep_free_space_bytes>
        </default>
        <s3_disk>
          <type>s3</type>
          <endpoint>https://ib-dl-saas-clickhouse-box-4.s3.us-east-1.amazonaws.com/s3_disk/</endpoint>
          <metadata_path>/var/lib/clickhouse/disks/s3_disk/</metadata_path>
        </s3_disk>
      </disks>
      <policies>
        <jbod>
          <volumes>
            <ebs_volume>
              <disk>default</disk>
              <max_data_part_size_bytes>107374182400</max_data_part_size_bytes>
            </ebs_volume>
            <s3_volume>
              <disk>s3_disk</disk>
              <prefer_not_to_merge>true</prefer_not_to_merge>
            </s3_volume>
          </volumes>
          <move_factor>0.2</move_factor>
        </jbod>
        <s3_policy>
          <volumes>
            <s3_volume>
              <disk>s3_disk</disk>
              <prefer_not_to_merge>true</prefer_not_to_merge>
            </s3_volume>
          </volumes>
        </s3_policy>
      </policies>
    </storage_configuration>
  </clickhouse>
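For context, the container-credentials lookup that these environment variables trigger can be modeled in a few lines. This is a simplified Python sketch of the documented AWS behavior (a relative URI is appended to the fixed link-local address 169.254.170.2; a full URI is used verbatim), not ClickHouse's actual C++ code. Note that the AWS SDK's variable for the absolute form is named AWS_CONTAINER_CREDENTIALS_FULL_URI.

```python
import os

# Documented link-local base address of the ECS task credentials endpoint.
ECS_CREDENTIALS_BASE = "http://169.254.170.2"

def resolve_container_credentials_url(env=os.environ):
    """Return the URL the container credentials provider should query,
    or None if neither variable is set (simplified sketch)."""
    relative = env.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    if relative is not None:
        # The relative URI is appended to the fixed base address.
        return ECS_CREDENTIALS_BASE + relative
    # The full URI, when provided, is used as-is.
    return env.get("AWS_CONTAINER_CREDENTIALS_FULL_URI")
```

If the provider never queries this URL (or queries it and ignores the result), the task-role credentials on Fargate are effectively invisible to ClickHouse, which matches the symptoms reported in this thread.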
Experiencing the same issue on this one; does anyone have any pointers as to where the issue might be? Happy to contribute, and I appreciate any input on where to start 😄
Probably also related to #43820?
Also seeing this issue when trying to set up S3 as the data store for an ECS cluster running ClickHouse.
Trivially reproducible in CloudShell:
Reproducible without ECS:
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://localhost:1338/latest/meta-data/container/security-credentials ch --query "SELECT * FROM s3('s3://clickhouse-public-datasets/tranco/*')"
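The command above points AWS_CONTAINER_CREDENTIALS_FULL_URI at a local HTTP endpoint (port 1338 is where the ecs-local credentials emulator typically listens). For anyone who wants to reproduce without that tooling, a throwaway stand-in endpoint can be built with Python's stdlib; the payload shape follows what the ECS credentials endpoint returns, with made-up values:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative fake credentials in the shape the ECS endpoint returns
# (all values are made up for this demo).
FAKE_CREDS = {
    "AccessKeyId": "AKIAEXAMPLE",
    "SecretAccessKey": "secret-example",
    "Token": "session-token-example",
    "Expiration": "2030-01-01T00:00:00Z",
}

class CredsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(FAKE_CREDS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port; use a fixed one (e.g. 1338)
# when pointing AWS_CONTAINER_CREDENTIALS_FULL_URI at it.
server = HTTPServer(("127.0.0.1", 0), CredsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/latest/meta-data/container/security-credentials"
with urllib.request.urlopen(url) as resp:
    creds = json.load(resp)
server.shutdown()
```

With such an endpoint running, whether ClickHouse ever issues a GET against it is a quick way to tell if the container-credentials provider is being consulted at all.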
The bug was introduced here: https://github.com/ClickHouse/ClickHouse/pull/13404
Thanks @alexey-milovidov 🙏
Describe what's wrong

I am trying to execute select * from s3(…) and ClickHouse dies with the following output:

Does it reproduce on recent release?

I use the Docker image yandex/clickhouse-server:21.12.3.32.

How to reproduce
I have defined the following Dockerfile:

All it does is set use_environment_credentials=true for my bucket in the eu-central-1 AWS region. This is required since I am running ClickHouse on ECS Fargate (in a container) with an IAM role attached to the task.

Here is the service definition in Terraform:
When using the s3 table function without a config file (running the container from the pure yandex/clickhouse-server:21.12.3.32 image without my custom Dockerfile), I was able to read this file from S3 by providing explicit AWS credentials. Now, with the configuration added, the operation fails even when AWS credentials are explicitly provided. The error is the same: see the traceback above; the ClickHouse container dies.

The solution also works when the container is run using docker run directly on an EC2 instance (without a Fargate service). No error appears if the service is run without a task role attached (task_role_arn=null above).

Expected behavior
I expect the s3 table function to use $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI when use_environment_credentials is enabled, and it looks like the codebase supports it:
https://github.com/ClickHouse/ClickHouse/blob/35883e0dae7be1ffa8948e5c56a168262fc7366f/src/IO/S3Common.cpp#L524-L526
AWS documentation on AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
S3 endpoint settings documentation.
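The expected behavior described above follows the usual default-chain order. For readers following along, that lookup order can be summarized in a small sketch; this is a simplified model based on the documented AWS default credentials chain, and the function name and return labels are made up for illustration, not taken from ClickHouse's code:

```python
def pick_credentials_source(env, use_environment_credentials=True):
    """Simplified model of the environment-credentials lookup order:
    static env keys first, then the container credentials endpoint,
    then EC2 instance metadata (IMDS)."""
    if not use_environment_credentials:
        # Only credentials passed explicitly (e.g. in the s3(...) call
        # or server config) are considered.
        return "explicit-config-only"
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "static-env"
    if ("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" in env
            or "AWS_CONTAINER_CREDENTIALS_FULL_URI" in env):
        return "container-endpoint"
    return "instance-metadata"
```

On ECS Fargate with a task role, the "container-endpoint" branch is the one that should fire, which is exactly the step this issue reports as broken.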