Closed fxn closed 2 years ago
It's not possible with the existing encoder, no. It wasn't considered, primarily because in my experience most servers are too old to properly handle chunked transfer encoding combined with a multipart/form-data content type, and most want a concrete Content-Length header. (A prime example is that S3 required a Content-Length for multipart/form-data uploads last I checked, though I haven't checked in years.)
Interesting!
I find it surprising that this depends on the Content-Type, since code processing a request body should have chunked encoding and compression handled transparently by lower layers on its behalf.
Cool, I'll close this then; if I come up with a solution myself I can follow up. Perhaps I'll try to change the requirement so that I can POST the file directly.
Thanks for your prompt reply!
So I think that's one version of the S3 upload API. It's annoying, though, and I haven't encountered anything that supported chunked encoding. That doesn't mean it doesn't exist. I suspect we could refactor things a bit to make it possible to use one or the other, but I'm also trying to move this into urllib3 directly, so I'm not as interested in doing that work here.
I want to upload CSV exports from PostgreSQL as multipart/form-data while the cursor retrieves records, without intermediate files. In this use case I do not know the size of the data beforehand. I would like to pass an iterator, or a generator, or something similar, and have the request use chunked encoding.
This does not seem to be possible with the existing encoder, right?
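For what it's worth, requests itself will switch to chunked transfer encoding when `data` is a generator, so one possible workaround (a sketch only, assuming the server tolerates chunked multipart bodies; the boundary string, field name, and endpoint here are hypothetical) is to build the multipart/form-data body by hand as a generator and set the Content-Type header yourself:

```python
def multipart_csv_chunks(rows, boundary, field_name="file", filename="export.csv"):
    """Yield a multipart/form-data body piece by piece.

    `rows` is any iterable of CSV lines (e.g. produced while a PostgreSQL
    cursor fetches records); nothing is buffered to disk, and the total
    size never needs to be known up front.
    """
    # Opening boundary plus the part headers for a single file field.
    yield (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: text/csv\r\n\r\n"
    ).encode()
    # Stream the CSV rows through as they arrive.
    for row in rows:
        yield row if isinstance(row, bytes) else row.encode()
    # Closing boundary terminates the multipart body.
    yield f"\r\n--{boundary}--\r\n".encode()


# Hypothetical usage: because `data` is a generator, requests sends the
# request with Transfer-Encoding: chunked and no Content-Length.
#
#     import requests
#     boundary = "pgcopyboundary"
#     requests.post(
#         "https://example.invalid/upload",
#         data=multipart_csv_chunks(cursor_rows, boundary),
#         headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
#     )
```

Whether the receiving server accepts this is exactly the compatibility question discussed above; the sketch only shows that the client side needs no intermediate file.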