Two things if you want to play with the S3 plugin...
1) An S3 service to talk to (could be Amazon, or minio, or zenko, etc.) -- this is what you would probably stand up with a new service in the top-level docker-compose.yml (we've considered this, but haven't done it yet).
2) The S3 resource plugin itself. The resource plugin can be installed from https://packages.irods.org. You would install it on the provider (see its README), and then configure it to talk to the S3 service (above); a rough sketch follows below.
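A minimal sketch of that install-and-configure step, assuming an Ubuntu-based provider, the prebuilt package name irods-resource-plugin-s3, and a MinIO service at minio:9000. The resource name, bucket, credentials, and context values below are illustrative assumptions, not the demo's actual configuration - the plugin README is the authoritative reference.

```
# On the catalog provider: install the prebuilt S3 resource plugin
# (package name assumed; available from packages.irods.org per the plugin README)
apt-get update && apt-get install -y irods-resource-plugin-s3

# Credentials the plugin will read (the README documents the exact file format;
# commonly the access key and secret key on separate lines)
printf 'minioadmin\nminioadmin\n' > /etc/irods/minio.keypair
chmod 600 /etc/irods/minio.keypair

# As the iRODS service account, create a cacheless S3 resource pointing at the
# MinIO endpoint and bucket. All names and context values here are assumptions.
iadmin mkresc s3resc s3 "$(hostname):/demobucket/irods/Vault" \
  "S3_DEFAULT_HOSTNAME=minio:9000;S3_AUTH_FILE=/etc/irods/minio.keypair;S3_REGIONNAME=us-east-1;S3_PROTO=HTTP;HOST_MODE=cacheless_attached;ARCHIVE_NAMING_POLICY=consistent"
```

After that, an iput -R s3resc from any client should land the object in the bucket.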
Happy to leave this issue open as a feature request for the irods_demo itself.
Right, 1) is easy, but it could be made part of the demo Compose file to minimize surprises and make it easy to follow a handful of commands that don't fail. I appreciate the willingness to consider this as a feature request.
No problem.
How did you find this project? You're the first one in the door...
I've been working with parallel filesystems for a while, so I heard of iRODS a long time ago. But now I'm looking to create a simple PoC with BeeGFS-to-$something tiering with iRODS (and S3 seems easier to deploy than NFS v4 as destination tier :-)).
Fair enough - thanks!
Yes, NetApp is a consortium member, so please send us email directly if in-public isn't the right forum.
Cool! That's probably some tech or product group, but I don't know who exactly.
Anyway, I've added a MinIO container and the S3 resource plugin (#42) - not sure if that's how it's supposed to be done, but it may save you some time. As I wrote in the comment, feel free to use whatever parts (or none at all) in your own commit.
Maybe it'd be nice to have the MinIO client (mc, used below) added to the icommands container's Dockerfile for easy verification, but I didn't want to bloat that image, so in my case I installed it manually after docker-compose up. I mention this because for some reason, when I tried the S3 resource plugin yesterday, the icommands container was unable to connect to an existing S3 service endpoint on my LAN - it took me a while to realize it wasn't a config problem but a container networking problem of some sort, and that's when I decided it's probably better to deploy S3 together with the demo to avoid these situations.
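A minimal sketch of that manual install, assuming the demo's MinIO service is reachable at minio:9000 with the default minioadmin credentials (the endpoint, credentials, and alias name are assumptions; substitute the values from your setup):

```
# Download the official MinIO client binary into the icommands container
wget https://dl.min.io/client/mc/release/linux-amd64/mc -O /usr/local/bin/mc
chmod +x /usr/local/bin/mc

# Point an alias at the S3 service (endpoint and credentials are assumptions)
mc alias set s3resc http://minio:9000 minioadmin minioadmin

# Quick check: list the buckets visible through that alias
mc ls s3resc
```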
```
irods@3cc28af82a1e:~$ date > sean-irods.txt
irods@3cc28af82a1e:~$ cat sean-irods.txt
Thu Nov 3 10:57:51 UTC 2022
irods@3cc28af82a1e:~$ iput -R s3resc sean-irods.txt --force
irods@3cc28af82a1e:~$ mc ls s3resc/irods/home/rods
[2022-11-03 10:58:00 UTC] 29B STANDARD sean-irods.txt
[2022-11-03 10:43:00 UTC] 0B STANDARD testfile.txt
irods@3cc28af82a1e:~$ rm sean-irods.txt
irods@3cc28af82a1e:~$ iget -R s3resc sean-irods.txt
irods@3cc28af82a1e:~$ date >> sean-irods.txt
irods@3cc28af82a1e:~$ cat sean-irods.txt
Thu Nov 3 10:57:51 UTC 2022
Thu Nov 3 10:58:24 UTC 2022
```
nice - thanks for the success copy/paste - great that it 'just worked'.
yes, adding a minio container is the right answer.
and we'll definitely consider if/where to install mc.
The MinIO service and S3 plugin have been added. Getting a resource set up automatically is not done yet, so I will leave this open.
resource set up, and mc installed ... somewhere? is it already available in the minio container?
It's not yet available - I think the idea was to make non-invasive changes for phase v1, and then decide whether and how to automate the creation of the bucket and the client-side S3 config.
demobucket is now created by default in minio, and the s3 plugin is now configured to talk to that bucket.
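For anyone following along, a quick way to verify that wiring from inside the icommands container (a sketch; the resource name s3resc and the mc alias are assumptions - the demo's actual names may differ):

```
# Show the S3 resource and its context string (resource name assumed)
ilsresc -l s3resc

# Put a file through iRODS and confirm its physical path lands in demobucket
date > hello.txt
iput -R s3resc hello.txt
ils -L hello.txt

# If mc is configured, the same object should be visible directly in MinIO
mc ls --recursive s3resc/demobucket
```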
closing!
Would it be possible to add to README.md some advice on DIY setup of the S3 resource plugin?
In the plugin's repo they say:
In this (containerized) scenario, would it be better to edit a Dockerfile (which one?) to deploy the plugin, and which resource mode would be recommended?
I've tried building and installing inside a live container, but there are a lot of packages and deps that need to be taken care of. Perhaps editing a Dockerfile would be a better idea?