Closed: rkfg closed this pull request 5 years ago
Hi @rkfg - thanks again for another patch to the s3 storage provider.
Thinking about it, I think being able to upload only local media is a good idea, but we should make it an option rather than hard-code it.
There are definitely situations, such as keeping a media_repository in a Docker container without wanting a large persistent storage volume attached, where storing media in an S3 (or S3-compatible) bucket is preferred.
Hah, apologies, and further to this, I have been reminded by @erikjohnston that this is actually just what the store_local and store_remote flags are for, which already exist as options. You should be able to not store remote content in S3 by using store_remote: False.
This will then leave it to other providers (such as a local, temporary, on-disk storage) to keep remote files, which may or may not be what you want - but if you want a transparent passthrough for remote content, I don't think this provider is the right place to put it.
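To make that concrete, something like the following in homeserver.yaml should do it. This is only a rough sketch: apart from the store_local / store_remote flags discussed here, the module path and remaining keys are from memory of the provider's README, so please double-check them there.

```yaml
media_storage_providers:
  - module: s3_storage_provider.S3StorageProviderBackend
    # Keep uploading media created on this homeserver to S3.
    store_local: True
    # Do not push remote (federated) media to S3; leave it to another
    # provider, e.g. local on-disk storage.
    store_remote: False
    # Wait for the S3 upload to finish before completing the request.
    store_synchronous: True
    config:
      bucket: your-media-bucket  # placeholder bucket name
      region_name: eu-west-1     # placeholder region
```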
I'm going to close this PR now, as I think it's been resolved by finding this option - but please re-open if you have further thoughts.
Intelligent tiering support was added in ad979a6bead960c2058bacf818a3ee51a9153bcf; we need to support it in the upload script as well. Also, we don't need to upload remote files: that's a waste of storage and money, since we can always retrieve them from their servers of origin. I'm not sure we even need to index them in the database. If there's a strong reason for that, I might add an option to include or exclude remote files from the upload.
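For illustration, here is a rough sketch of the kind of change this needs, not the actual s3_media_upload code: a boto3 upload that sets the Intelligent-Tiering storage class, plus a hypothetical helper for skipping remote media (the upload_local_media / should_upload names and the media_origin convention are made up for the example).

```python
from typing import Optional

import boto3

s3 = boto3.client("s3")


def upload_local_media(bucket: str, local_path: str, key: str) -> None:
    """Upload one local media file using the Intelligent-Tiering storage class."""
    # StorageClass is a standard ExtraArgs option for boto3's managed transfer;
    # INTELLIGENT_TIERING lets S3 move rarely accessed media to cheaper tiers.
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
    )


def should_upload(media_origin: Optional[str]) -> bool:
    # Skip remote media entirely: it can always be re-fetched from the origin
    # server, so keeping a copy in S3 wastes storage and money.
    # In this sketch, media_origin is None for media local to this homeserver.
    return media_origin is None
```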
Signed-off-by: Sergey Shpikin <rkfg@rkfg.me>