Closed ghost closed 10 years ago
That seems easy enough.
For SSL I wonder - should the URI be s3s:// or somesuch?
The one remaining key in the config file then at present is crypto_keyid - how might that be handled?
It should also be possible to pass the bucket name in via the command line arguments. Accepting an s3-prefixed URI, as s3cmd does, seems reasonable, for example: sfs3 put afile.txt s3://somebucketname/
Actually, that one strikes me as really hard. We have to split the path somewhere into the "prefix" and the "dirname" within the root, because we insert that 'data/' or 'meta/' segment between them.
E.g. given a config file with bucket: 'a-bucket-and-some-jifjeakfjaljfeioahj/pevans-test/', then sfs3 put afile.txt my-path/dir/afile.txt will create the S3 object at the path s3://a-bucket-and-some-jifjeakfjaljfeioahj/pevans-test/data/my-path/dir/afile.txt
We couldn't just supply the combined S3 bucket+path as a URI to the put and get commands, because then they wouldn't know where to insert the 'data/' part.
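To illustrate the key construction being discussed, here's a minimal sketch. The function name and arguments are hypothetical, not sfs3's actual API; it just shows why the tool needs the prefix separately from the user path, so the 'data/' (or 'meta/') segment can be slotted in between:

```python
def build_object_key(prefix, section, relpath):
    """Join the configured prefix, the data/meta section, and the
    user-supplied path into one S3 object key."""
    parts = [prefix.strip("/"), section, relpath.lstrip("/")]
    return "/".join(p for p in parts if p)

# With prefix 'pevans-test' from the config, putting my-path/dir/afile.txt:
print(build_object_key("pevans-test", "data", "my-path/dir/afile.txt"))
# → pevans-test/data/my-path/dir/afile.txt
```

If the caller handed us only a combined s3://bucket/full/path URI, there would be no way to tell where the configured prefix ends and the user path begins.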
Ok, we can always just use env vars for the bucket name and prefix as well.
Oops I missed the part about the crypto configuration - might as well stuff this in an env var too.
So I'm thinking {SFS3,AWS}_ACCESS_KEY {SFS3,AWS}_SECRET_KEY SFS3_BUCKET_PREFIX SFS3_CRYPTO_KEYID
The idea being to first load the config file if one exists, then check the environment variables and let them override the file, and finally complain if any of the required values still aren't set. This means that if all the required env vars exist, no config file is required at all.
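A rough sketch of that lookup order, assuming the variable names proposed above (the config-file format and which values are required are assumptions here):

```python
import os

# Values the tool can't run without; SFS3_CRYPTO_KEYID is treated as optional.
REQUIRED = ("SFS3_ACCESS_KEY", "SFS3_SECRET_KEY", "SFS3_BUCKET_PREFIX")

def load_settings(file_config):
    """Start from the config file (may be empty), let env vars override,
    then fail if anything required is still missing."""
    settings = dict(file_config)
    for name in ("SFS3_ACCESS_KEY", "SFS3_SECRET_KEY",
                 "SFS3_BUCKET_PREFIX", "SFS3_CRYPTO_KEYID"):
        if name in os.environ:
            settings[name] = os.environ[name]
    missing = [n for n in REQUIRED if n not in settings]
    if missing:
        raise SystemExit("missing required settings: " + ", ".join(missing))
    return settings
```

With all three required env vars set, load_settings({}) succeeds with no config file at all, which is exactly the behaviour described above.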
The tool should be able to operate without requiring a configuration file to be present. For the AWS authentication credentials, the environment variables AWS_ACCESS_KEY and AWS_SECRET_KEY should be used. In addition, SFS3-prefixed versions of the same environment variables should take precedence over the standard AWS ones; for instance, if both SFS3_ACCESS_KEY and AWS_ACCESS_KEY are present, then SFS3_ACCESS_KEY should be used.
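The precedence rule can be sketched as a one-line fallback (the helper name is illustrative, not part of the tool):

```python
import os

def credential(name):
    """Prefer the SFS3_-prefixed variable, falling back to the AWS_ one."""
    return os.environ.get("SFS3_" + name, os.environ.get("AWS_" + name))

# If both SFS3_ACCESS_KEY and AWS_ACCESS_KEY are set,
# credential("ACCESS_KEY") returns the SFS3_ value.
```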