Is there a special reason why the file is built in-memory from the chunks? It seems like this could make the server run out of memory under many concurrent large uploads.
Wouldn't it work to create an empty resulting temporary file, open that in binary append mode, and then read each chunk and write/append it to the temporary file? This would only keep one chunk in memory and not all of them (plus one) at peak usage.
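Something along these lines is what I have in mind (a sketch only; `chunk_paths`, the chunk size, and the helper name are placeholders, not from the code under review):

```python
import os
import shutil
import tempfile


def assemble_chunks(chunk_paths, buffer_size=64 * 1024):
    """Stream uploaded chunks into a single temporary file.

    Only one buffer of `buffer_size` bytes is resident at a time,
    instead of accumulating every chunk in memory before writing.
    """
    fd, out_path = tempfile.mkstemp()
    with os.fdopen(fd, "ab") as out:  # binary append mode
        for path in chunk_paths:
            with open(path, "rb") as chunk:
                # copyfileobj reads and writes in fixed-size buffers
                shutil.copyfileobj(chunk, out, buffer_size)
    return out_path
```

With this shape, peak memory stays bounded by the buffer size regardless of how many chunks or concurrent uploads there are.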