Closed: subnix closed this 1 year ago
I understand the need for this patch, but one of the foundations of depot is to provide the exact same interface and features on all of its backends (switching from one backend to another shouldn't involve any code change), as the purpose is to allow using in-memory for tests, local for development, S3 for production, and so on.
I don't think there is a way to provide a similar feature on the boto and boto3 storages: they can read from a stream, but they can't provide a stream you can write to.
What about multipart upload in S3? We could split the input stream into parts and upload them via the S3 low-level API.
Closing this one for the moment: it's still uncertain whether streamed uploads can be easily implemented in all backends, and we want to ensure that all storages supported by depot guarantee the same minimum set of functionalities.
Sometimes we need to upload large files into storage or pass a stream to another function. Previously we had only one way: put the file on the filesystem and then pass its file descriptor to depot, which requires space in the filesystem and increases file-processing time. I've added a `create_stream` method to the interface and implemented it for local storage and GridFS.
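The thread doesn't show the actual patch, but a local-storage version of such a method might look roughly like this (class name, signature, and atomic-rename behavior are all assumptions for illustration, not depot's real implementation):

```python
import os
import tempfile


class LocalStorageStream:
    """Writable stream that lands atomically in a local-storage directory.

    Writes go to a temporary file next to the destination; on close() the
    temp file is renamed into place, so readers never observe a
    half-written file.
    """

    def __init__(self, root, file_id):
        self._final_path = os.path.join(root, file_id)
        fd, self._tmp_path = tempfile.mkstemp(dir=root)
        self._file = os.fdopen(fd, "wb")

    def write(self, data):
        return self._file.write(data)

    def close(self):
        self._file.close()
        os.replace(self._tmp_path, self._final_path)  # atomic on POSIX

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.close()
        else:
            # On error, discard the partial upload instead of publishing it.
            self._file.close()
            os.unlink(self._tmp_path)


def create_stream(root, file_id):
    """Return a stream the caller can write into directly, avoiding the
    intermediate 'write to disk, then hand depot a descriptor' step."""
    return LocalStorageStream(root, file_id)
```

Usage would be symmetrical with the existing read path: the caller writes chunks into the returned stream instead of staging the whole file first.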