axman6 / amazonka-s3-streaming

Provides a conduit-based interface for uploading data to S3 using the Multipart API
MIT License

Reuse buffer for streamingUpload #34

Open axman6 opened 1 year ago

axman6 commented 1 year ago

In streamingUpload, a new buffer is allocated for every chunk in the call to finaliseS. A better idea might be to allocate one buffer and reuse it; at the moment there shouldn't be any possibility of concurrent access to the buffer between different requests.
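
A minimal sketch of what a reusable part buffer might look like. The `PartBuffer` type and helpers here are hypothetical, not the library's actual `S`/`finaliseS` internals; the key point is that the pinned memory is allocated once and the downstream request must finish before the buffer is reset:

```haskell
{-# LANGUAGE RecordWildCards #-}

import qualified Data.ByteString          as BS
import qualified Data.ByteString.Internal as BSI
import qualified Data.ByteString.Unsafe   as BSU
import           Data.IORef               (IORef, newIORef, readIORef, writeIORef)
import           Data.Word                (Word8)
import           Foreign.ForeignPtr       (ForeignPtr, mallocForeignPtrBytes, withForeignPtr)
import           Foreign.Marshal.Utils    (copyBytes)
import           Foreign.Ptr              (castPtr, plusPtr)

-- One pinned buffer, allocated once, plus the current fill offset.
data PartBuffer = PartBuffer
  { pbPtr    :: ForeignPtr Word8
  , pbSize   :: Int
  , pbOffset :: IORef Int
  }

newPartBuffer :: Int -> IO PartBuffer
newPartBuffer size =
  PartBuffer <$> mallocForeignPtrBytes size <*> pure size <*> newIORef 0

-- Copy an incoming chunk into the buffer; returns False if it would overflow.
appendChunk :: PartBuffer -> BS.ByteString -> IO Bool
appendChunk PartBuffer{..} bs = do
  off <- readIORef pbOffset
  let len = BS.length bs
  if off + len > pbSize
    then pure False
    else do
      withForeignPtr pbPtr $ \dst ->
        BSU.unsafeUseAsCStringLen bs $ \(src, _) ->
          copyBytes (dst `plusPtr` off) (castPtr src) len
      writeIORef pbOffset (off + len)
      pure True

-- View the filled prefix as a ByteString without copying.  The part must be
-- fully uploaded before resetBuffer is called, because the bytes are
-- overwritten in place by the next part.
finalisePart :: PartBuffer -> IO BS.ByteString
finalisePart PartBuffer{..} = do
  off <- readIORef pbOffset
  pure (BSI.fromForeignPtr pbPtr 0 off)

resetBuffer :: PartBuffer -> IO ()
resetBuffer PartBuffer{..} = writeIORef pbOffset 0
```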

Taking this one step further: instead of using the S abstraction, we could write directly into this shared buffer as each ByteString is received. We would need to be very careful about ordering so we don't start writing the next chunk into the buffer before the previous one has been sent, but I think yield already handles that by passing the chunk downstream to be uploaded before the conduit continues.
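
A rough sketch of how that repacking conduit could use the shared buffer, building on the hypothetical `PartBuffer` helpers above. It relies on `yield` suspending this conduit until the downstream uploader has consumed the part, and it assumes no single incoming chunk exceeds the buffer size:

```haskell
{-# LANGUAGE LambdaCase #-}

import           Conduit                (ConduitT, await, yield)
import           Control.Monad          (unless, void)
import           Control.Monad.IO.Class (MonadIO, liftIO)
import qualified Data.ByteString        as BS

-- Repack an incoming ByteString stream into fixed-size parts that all share
-- one PartBuffer.  This is only safe because yield suspends this conduit
-- until the downstream uploader has consumed the part and awaits again, so
-- the buffer is never overwritten while a part is still in flight.
repackParts :: MonadIO m => PartBuffer -> ConduitT BS.ByteString BS.ByteString m ()
repackParts buf = loop
  where
    loop = await >>= \case
      Nothing -> do
        part <- liftIO (finalisePart buf)
        unless (BS.null part) (yield part)      -- flush the final, partial part
      Just bs -> do
        ok <- liftIO (appendChunk buf bs)
        if ok
          then loop
          else do
            part <- liftIO (finalisePart buf)
            yield part                          -- uploader runs here, before reuse
            liftIO (resetBuffer buf)
            void (liftIO (appendChunk buf bs))  -- assumes a chunk never exceeds pbSize
            loop
```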

endgame commented 1 year ago

I would also be interested in concurrent uploads, even when drawing from a stream. Buffering N×chunksize blocks of memory doesn't seem that bad, but getting concurrent uploads and reused buffers right at the same time could be really hard.
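
For reference, a hedged sketch of the "buffer N×chunksize and upload concurrently" approach, drawing already-materialised parts from a stream. `uploadPart` is a stand-in for the real UploadPart request (not the library's API), and a semaphore caps how many parts, and therefore how many chunk-sized buffers, are alive at once:

```haskell
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE LambdaCase   #-}

import           Conduit                  (ConduitT, await)
import           Control.Concurrent.Async (async, wait)
import           Control.Concurrent.QSem  (newQSem, signalQSem, waitQSem)
import           Control.Exception        (finally)
import           Control.Monad.IO.Class   (liftIO)
import qualified Data.ByteString          as BS

-- Sink a stream of already-materialised parts, uploading up to n of them
-- concurrently.  The semaphore bounds how many parts are in flight; the
-- results are collected in part order once the stream is exhausted.
uploadPartsConcurrently
  :: Int                             -- ^ maximum in-flight parts (N)
  -> (Int -> BS.ByteString -> IO r)  -- ^ stand-in for the real UploadPart call
  -> ConduitT BS.ByteString o IO [r]
uploadPartsConcurrently n uploadPart = do
    sem <- liftIO (newQSem n)
    go sem 1 []
  where
    go sem !partNo acc = await >>= \case
      Nothing   -> liftIO (traverse wait (reverse acc))
      Just part -> do
        a <- liftIO $ do
          waitQSem sem               -- block while N parts are already in flight
          async (uploadPart partNo part `finally` signalQSem sem)
        go sem (partNo + 1) (a : acc)
```

Combining this with reused buffers is where it gets hard: a part handed to an in-flight upload must not alias a buffer that the producer is about to overwrite, so you would need at least N+1 buffers cycled through some kind of pool.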