Closed: bryan-lifelink closed this issue 3 years ago
I'm not sure if I understand you correctly, but I think you can find how streams should be handled in v3 here: https://github.com/octet-stream/form-data/issues/32
It is possible that the mention in the readme about deprecating the use of streams confused me. To make sure I understand, here is what we have been doing:

```js
fd.set('file', stream, fileName, { size: fileSize })
```
It looks like we could use the technique you mentioned in the linked issue to get something similar working in v3. Assuming we can make that work for us, will this also work in v4, or is support for file streams expected to go away entirely at that point?
> will this also work in v4, or is support for file streams expected to go away entirely at that point?
The major point of this package is being as spec-compatible as I can achieve, so there will always be support for File objects, no matter what source is used for them - just wrap the source in an object that has a File-ish shape. You can use this approach to send files sourced from any stream - it should work for you. But note that you'll need to handle the form-data encoding manually to bypass the Content-Length header if any file within the FormData instance has an unknown size (as I mentioned in the linked issue).
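The File-ish wrapper described above can be sketched roughly like this (a minimal sketch, not the library's own class - the `StreamFile` name and its fields are illustrative):

```js
import { Readable } from "node:stream";

// A File-shaped object: name/size/type properties, a stream() method
// returning the underlying Readable, and a [Symbol.toStringTag] of
// "File" so consumers that duck-type files will accept it.
class StreamFile {
  #source;

  constructor(source, name, { size, type = "" } = {}) {
    this.name = name;
    this.size = size; // may be undefined for unknown-length sources
    this.type = type;
    this.lastModified = Date.now();
    this.#source = source;
  }

  stream() {
    return this.#source;
  }

  get [Symbol.toStringTag]() {
    return "File";
  }
}

const file = new StreamFile(Readable.from(["hello"]), "hello.txt", { size: 5 });
console.log(Object.prototype.toString.call(file)); // [object File]
// fd.set("file", file)  // v3-style: pass the File-ish object directly
```

If the source's size is unknown, leave `size` undefined and handle the encoding (and Content-Length) manually, as noted above.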
Yes, we have the headers sorted out. Thank you!
This is kind of the problem I wanted to solve with https://github.com/node-fetch/fetch-blob/issues/99
Imagine creating a file/blob from a ReadableStream when you already know how large the file is going to be: `blobFromStream(readableStream, { size, type })`. Another, more dynamic approach would be:
```js
blobFromCallback(function (start, end) {
  // could be called multiple times during blob#slice()
  return fetch_partial_request_stream
}, { size, type })
```
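The proposed shape could be sketched as follows (a hypothetical API - nothing here exists in fetch-blob today; the byte range here is simulated over an in-memory buffer instead of a ranged network request):

```js
// Hypothetical blobFromCallback: defers all reading to a callback that
// can be invoked repeatedly with a byte range, e.g. during slice().
function blobFromCallback(getPart, { size, type = "" } = {}) {
  return {
    size,
    type,
    [Symbol.toStringTag]: "Blob",
    slice(start = 0, end = size) {
      // Each slice re-scopes the callback to the requested byte range;
      // in the real proposal this could become a ranged fetch().
      return blobFromCallback(
        (s, e) => getPart(start + s, Math.min(start + e, end)),
        { size: end - start, type }
      );
    },
    stream() {
      return getPart(0, size); // full-range read
    },
  };
}

// Toy usage: the "partial request" is a byte range over a local buffer.
const data = Buffer.from("0123456789");
const blob = blobFromCallback((start, end) => data.subarray(start, end), {
  size: data.length,
});
console.log(blob.slice(2, 6).size); // 4
```

The point of the callback form is that a slice never has to buffer anything: it only narrows the range handed to the next read.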
I think I'm going to clarify in the v3 readme how to handle files sourced from a stream, since it seems like you're not the only one who was confused by the removal of support for Readable streams.
We need to upload arbitrarily sized files that we retrieve from the network. Currently we read the file size information separately, then create a readable stream from the network request to retrieve the file and pass that stream to `fd.set('file', ...)`. This allows us to quickly and efficiently upload files that might not fit into memory. Is there a reasonable way to handle this use case with this module going forward?
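The flow described above can be sketched like this (illustrative only - the network request is simulated with an in-memory stream, and the commented-out `fd.set` call uses the v2-style signature shown earlier in the thread):

```js
import { Readable } from "node:stream";

// Stand-in for a network fetch whose size is learned separately
// (e.g. via a HEAD request or a metadata endpoint).
function openNetworkStream() {
  const chunks = ["chunk-1", "chunk-2"];
  return { size: chunks.join("").length, stream: Readable.from(chunks) };
}

const { size, stream } = openNetworkStream();
// v2-style usage: the stream is passed directly, with the size supplied
// so Content-Length can be computed without buffering the file.
// fd.set("file", stream, "report.bin", { size });
console.log(size); // 14
```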