There were actually two issues that broke authenticated buckets. The first was that we weren't properly setting the Host header. The second was that we weren't generating a valid canonical URI, which gets signed and sent as part of the Authorization header. The solution to both was to parse the hostUrl more carefully, and in only one spot.
Testing this takes a somewhat involved setup, so I'll loosely sketch it out here. This is from my notes, and it assumes you're running in a Linux container as root. The container will need all the tools for building C++, as well as XRootD and the MinIO server binary. You'll also need a non-root user to run the xrootd process under. I'm also assuming you have some experience dealing with XRootD and all its funkiness.
Download & install the MinIO server.
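For an x86_64 Linux container, something like the following should work (the URL is MinIO's standard download location for the amd64 server binary; adjust it if your architecture differs):
# Download the MinIO server binary and put it on the PATH
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
mv minio /usr/local/bin/minio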
Then, from a terminal:
export MINIO_ROOT_USER="admin"
export MINIO_ROOT_PASSWORD="password"
The plugin uses "virtual-hosted-style" requests instead of Amazon's older "path-style" requests. To get MinIO working in this setup, you need to export the env var:
export MINIO_DOMAIN=`hostname`
You also need to add this line to the bottom of /etc/hosts:
172.17.0.XXX <bucket name>.<hostname>
where the '172.17.0.XXX' IP address should match the other IPs you see there, and where <bucket name> is the name of a bucket you plan to create or have already created. I usually use 'test-bucket'.
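For example, if the container's IP turns out to be 172.17.0.2 and the bucket will be called test-bucket (both just illustrative values for this sketch), you could append the entry with:
echo "172.17.0.2 test-bucket.$(hostname)" >> /etc/hosts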
Next, launch the server, pointing it at the path you want to export:
MINIO_ROOT_USER=$MINIO_ROOT_USER MINIO_ROOT_PASSWORD=$MINIO_ROOT_PASSWORD minio server `pwd`/test --console-address ":9001"
Now log into the server via a browser at http://localhost:9001 to configure test buckets:
user = admin
password = password
Then create a bucket (I usually call it test-bucket, as explained above) and add a test file. Set the bucket to "private". Create a user with "readwrite" permissions and configure an access/secret key pair for that user. Keep track of the keys, and write them to `/etc/xrootd/access.key` and `/etc/xrootd/secret.key`.
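For example (the key values and the xrootd user name are placeholders for whatever you created; restricting permissions isn't strictly required, just a sensible precaution, though the files do need to be readable by the non-root user that will run xrootd):
# -n keeps a trailing newline out of the key files
echo -n "<access key>" > /etc/xrootd/access.key
echo -n "<secret key>" > /etc/xrootd/secret.key
chown <xrootd user> /etc/xrootd/access.key /etc/xrootd/secret.key
chmod 600 /etc/xrootd/access.key /etc/xrootd/secret.key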
After building the plugin, you can test by using this xrootd config:
all.export /
xrd.protocol http:8080 libXrdHttp.so
# Setting up S3 plugin
ofs.osslib <path to libXrdS3.so>
xrootd.async off
s3.service_name s3.amazonaws.com
s3.region us-east-1
# Next is whatever URL your MinIO server reports as its S3 endpoint, usually like below
s3.service_url http://<hostname>:9000
# These should point to wherever you actually wrote the keys to
s3.access_key_file /etc/xrootd/access.key
s3.secret_key_file /etc/xrootd/secret.key
ofs.trace all
xrd.trace all -sched
http.trace all
Run the plugin under your non-root user: `xrootd -c <configfile name>`. You should get through all of the xrootd initialization.
Phew, you made it through all that! Give yourself a pat on the back and do a quick stretch.
Finally, you should be able to get the test file from your s3 endpoint via your browser at:
http://localhost:1094/s3.amazonaws.com/us-east-1/<bucket name>/<object name>
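If you'd rather test from the command line, hitting the same URL with curl should also return the object (bucket and object names are whatever you created above):
curl -v http://localhost:1094/s3.amazonaws.com/us-east-1/<bucket name>/<object name>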
Closes #7