Open mrcnski opened 4 years ago
I use `upload_file_request_with_chunks` with a function that yields bytes for this. The downside is that if the upload is interrupted on the network, the server doesn't seem to notice, and I sometimes get skylinks back for truncated content.
EDIT: it doesn't look like this works any more in the new version.
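For reference, a rough sketch of that original approach, assuming the older module-level API where `upload_file_request_with_chunks` accepted any iterable of bytes; the exact import style and signature may differ between SDK versions:

```python
import siaskynet as skynet

def stream_chunks(path, chunk_size=64 * 1024):
    """Yield a file in fixed-size blocks."""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                return
            yield block

# Assumption: the helper takes an iterable of bytes and returns the raw
# requests response; verify against the SDK version you are running.
response = skynet.upload_file_request_with_chunks(stream_chunks("example.bin"))
print(response)
```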
Requests does support this: https://requests.readthedocs.io/en/master/user/advanced/#streaming-uploads
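In its simplest form that means passing a generator as `data`, which makes requests send the body with `Transfer-Encoding: chunked` since no length is known up front (the URL below is a placeholder):

```python
import requests

def generate_data():
    """Produce the payload lazily, e.g. from computation or another stream."""
    for i in range(1024):
        yield f"record {i}\n".encode()

# With no known length, requests sends the body using chunked transfer encoding.
resp = requests.post("https://example.com/upload", data=generate_data())
print(resp.status_code)
```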
EDIT: I also checked the requests source for streaming with a known length. The request handler in `models.py` calls a `super_len()` helper in `utils.py`, and this very flexible helper tries everything it can on the data passed and uses whatever works. Notably, it tries calling `.__len__()` and also reads a `.len` attribute, so in theory specifying one of these should work fine.
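So a thin wrapper that exposes the total size via a `len` attribute should, in principle, get a `Content-Length` header instead of chunked encoding. A minimal sketch (the URL and data source are placeholders; the `super_len()` behaviour is as described above, not something the SDK itself guarantees):

```python
import requests

class LengthHintedStream:
    """Wraps an iterator of byte chunks whose total size is known up front,
    exposing it as a `len` attribute for requests' super_len() to pick up."""

    def __init__(self, chunks, total_size):
        self._chunks = iter(chunks)
        self.len = total_size  # read by super_len() in requests/utils.py

    def __iter__(self):
        # prepare_body() in models.py treats non-str/list/dict objects
        # that define __iter__ as stream bodies.
        return self._chunks

def produce_chunks(n=4, size=1024):
    for _ in range(n):
        yield b"x" * size

# Because the length is known, requests should set Content-Length rather
# than falling back to Transfer-Encoding: chunked.
resp = requests.post("https://example.com/upload",
                     data=LengthHintedStream(produce_chunks(), total_size=4 * 1024))
```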
EDIT: Just for completeness, a file-like object can be passed just like a chunked iterator. It worked for me to implement streaming progress by inheriting from `io.FileIO` or `io.BufferedReader` and overriding `read`.
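A minimal sketch of that progress trick, subclassing `io.FileIO` and overriding `read` (the callback and endpoint here are placeholders):

```python
import io
import requests

class ProgressFile(io.FileIO):
    """File that reports how many bytes have been read (and thus sent),
    since the HTTP stack sends file-like bodies by calling read() in blocks."""

    def __init__(self, path, callback):
        super().__init__(path, "rb")
        self._callback = callback
        self._sent = 0

    def read(self, size=-1):
        chunk = super().read(size)
        if chunk:
            self._sent += len(chunk)
            self._callback(self._sent)
        return chunk

# Placeholder endpoint; the total size comes from the file descriptor,
# so requests can still set Content-Length.
with ProgressFile("example.bin", lambda n: print(f"{n} bytes sent")) as f:
    requests.post("https://example.com/upload", data=f)
```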
We should support uploading generic data using streams. We will need to use the https://toolbelt.readthedocs.io/ library for this, as `requests` does not support this.
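If we do go the toolbelt route, its `MultipartEncoder` streams multipart bodies without loading them into memory. A minimal sketch (the field name and URL are placeholders, not the portal's actual endpoint):

```python
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# Field name and URL are placeholders for illustration only.
encoder = MultipartEncoder(
    fields={"file": ("example.bin", open("example.bin", "rb"), "application/octet-stream")}
)
resp = requests.post(
    "https://example.com/upload",
    data=encoder,
    headers={"Content-Type": encoder.content_type},
)
print(resp.status_code)
```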