What happens?
When running duckdb inside a docker container, the function
call load_aws_credentials();
gives empty rows. Running it outside the container works.
To Reproduce
Setup 1 (Local machine)
Install duckdb with brew install duckdb
Configure aws cli (in my case sso)
check with
aws s3 ls
that you can list the buckets in the chosen profile
List the files in the chosen bucket
Open duckdb
In the duckdb console run
call load_aws_credentials();
It should display the secrets
Run
select * from read_csv('s3://bucket_path');
It works.
Sample output of
call load_aws_credentials();
locally
Step 2: Docker container
Build a docker image
Install duckdb, boto3, awscli, also the duckdb binary
Run the docker container with your local
.aws
directory mounted to the one in the container
Inside the docker container, run
aws s3 ls
you might need to do an
export AWS_PROFILE=profilename
If you can see the buckets listed, it means that credentials sharing between the container and host works fine
run duckdb with
./duckdb
Run
call load_aws_credentials();
It doesn't show anything, even after specifying the name of the profile with
call load_aws_credentials('profilename')
Running it from python also doesn't work
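The Python attempt looks roughly like this (a hypothetical sketch, not the exact script used: 'profilename' is a placeholder, and it assumes the duckdb package and the aws extension are installable in the environment):

```python
def load_credentials(profile: str) -> list:
    """Run DuckDB's load_aws_credentials for the given profile and
    return the resulting rows."""
    import duckdb  # assumed to be installed (pip install duckdb)

    con = duckdb.connect()  # in-memory database
    con.execute("INSTALL aws; LOAD aws;")
    return con.execute(f"CALL load_aws_credentials('{profile}')").fetchall()

if __name__ == "__main__":
    # Inside the container this comes back empty; on the host it
    # shows the loaded key id, secret and region.
    print(load_credentials("profilename"))
```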
Dockerfile
FROM python:3.11-slim
RUN pip install poetry --no-cache-dir
RUN apt update && apt install awscli wget unzip -y
RUN wget https://github.com/duckdb/duckdb/releases/download/v0.10.1/duckdb_cli-linux-amd64.zip
RUN mkdir /app
RUN unzip duckdb_cli-linux-amd64.zip -d /app
RUN rm duckdb_cli-linux-amd64.zip
WORKDIR /app
Build the container
docker build -t duckdb .
Command to run the container locally
docker run -it -v $(pwd):/app -v /Users/home/.aws:/root/.aws duckdb /bin/bash
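For repeated runs, the same bind mounts used in the docker run invocation can be captured in a compose file (a hypothetical sketch; the host paths are the ones from this report):

```yaml
# docker-compose.yml (illustrative) mirroring the docker run flags
services:
  duckdb:
    build: .
    stdin_open: true        # -i
    tty: true               # -t
    volumes:
      - .:/app                        # $(pwd):/app
      - /Users/home/.aws:/root/.aws   # share host AWS credentials
    command: /bin/bash
```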
once inside the container, run duckdb with
./duckdb
and then run
call load_aws_credentials();
They're empty: as shown here, running the command inside the container returns an empty table like this
Since the
.aws
directory is shared, the credentials are there, and with the awscli installed in the container you can check that they're valid with
aws s3 ls
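One way to rule the permission theory in or out is a small stdlib check on the mounted files (a hypothetical helper, not part of DuckDB; /root/.aws is the mount target from the docker run command):

```python
import os
from pathlib import Path

def inspect_aws_dir(base: Path) -> dict:
    """Report existence, readability and permission bits of the
    AWS config files under the given directory."""
    report = {}
    for name in ("config", "credentials"):
        path = base / name
        if path.exists():
            # Keep only the permission bits (e.g. '0o600')
            mode = oct(path.stat().st_mode & 0o777)
            report[name] = {"readable": os.access(path, os.R_OK), "mode": mode}
        else:
            report[name] = {"readable": False, "mode": None}
    return report

if __name__ == "__main__":
    # Inside the container the mount point is /root/.aws
    print(inspect_aws_dir(Path("/root/.aws")))
```

Note that with an SSO setup there may be no static credentials file at all, so a missing credentials entry here is not by itself an error.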
What could it be?
I think it might have something to do with permissions on the files in
.aws
, or with how these credentials are accessed. Unfortunately, I don't know how to debug or trace what happens when the function is called.
OS:
x64 docker, Mac OS aarch64 host
DuckDB Version:
v0.10.1 4a89d97db8
DuckDB Client:
Bash, Python
Full Name:
Pratyay Modi
Affiliation:
None
Have you tried this on the latest nightly build?
I have tested with a release build (and could not test with a nightly build)
Have you tried the steps to reproduce? Do they include all relevant data and configuration? Does the issue you report still appear there?