mrdulin closed this issue 5 years ago
It doesn't have a built in method for resuming uploads if the connection is interrupted, if that is what you mean.
It streams the uploads into the resolvers, so you can decide what to do if the stream disconnects. You could keep the partially uploaded file in storage, and have a way to query the number of bytes uploaded. For the client to resume the upload, it could use the mutation / upload ID to query the progress, cut the File into a Blob with just the remaining bytes, then upload it using the right ID. The resolver would then stream the remaining bytes onto the end of the Blob in storage.
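The client half of that resume flow can be sketched in a few lines, since File inherits slice() from Blob. The byte offset here would come from a progress query like the one described above; everything else is just standard Blob API:

```javascript
// Sketch only: `bytesUploaded` is assumed to come from a progress
// query against the upload ID; it is not part of any real API here.
function remainingChunk(file, bytesUploaded) {
  // File inherits slice() from Blob, so this returns a Blob
  // containing only the bytes that still need to be uploaded.
  return file.slice(bytesUploaded, file.size);
}

// Example with a plain Blob standing in for a File (11 bytes total,
// 6 already uploaded, so 5 remain):
const file = new Blob(["hello world"]);
const rest = remainingChunk(file, 6);
```

The resulting Blob can then be sent as the next upload, tagged with the same upload ID so the resolver knows where to append it.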
That might be totally impractical tho, since the client might not have the ability or resources to cut up the File into Blob chunks. Just thinking out loud.
Another idea: on the client you could cut up the File into Blob chunks and make multiple mutations, perhaps even in one request:
```graphql
mutation($uploadId: ID!, $chunk1: Upload!, $chunk2: Upload!) {
  # Aliases are required, since the same field can't be selected
  # twice with different arguments.
  chunk1: uploadChunk(uploadId: $uploadId, chunkPos: 1, chunk: $chunk1) {
    # Payload
  }
  chunk2: uploadChunk(uploadId: $uploadId, chunkPos: 2, chunk: $chunk2) {
    # Payload
  }
}
```
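On the server side, a resolver for such a mutation would need to collect chunks by position and join them once they have all arrived. A minimal in-memory sketch, assuming a hypothetical `storeChunk` helper and a known total chunk count (neither is part of any upload spec):

```javascript
// In-memory chunk store keyed by upload ID; real code would use
// durable storage, since chunks may arrive across many requests.
const uploads = new Map();

function storeChunk(uploadId, chunkPos, buffer, totalChunks) {
  if (!uploads.has(uploadId)) uploads.set(uploadId, new Map());
  const chunks = uploads.get(uploadId);
  chunks.set(chunkPos, buffer);
  // Not all chunks have arrived yet: nothing to assemble.
  if (chunks.size < totalChunks) return null;
  // All chunks present: concatenate them in position order.
  const ordered = [...chunks.keys()].sort((a, b) => a - b);
  return Buffer.concat(ordered.map((pos) => chunks.get(pos)));
}

// Chunks can arrive out of order; assembly happens on the last one:
const first = storeChunk("upload-1", 2, Buffer.from("world"), 2);
const whole = storeChunk("upload-1", 1, Buffer.from("hello "), 2);
```

Here `first` is null (only one of two chunks stored) and `whole` is the reassembled file.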
Closing because the question has been answered, but feel free to continue the conversation :)
If a clearly defined feature request emerges, we can raise an issue / PR.
@jaydenseric Wonderful. Very helpful. Thanks.
I agree with you. Using a GraphQL batched query/mutation to upload each chunk may be a solution.
I am not very clear about this part:
That might be totally impractical tho, since the client might not have the ability or resources to cut up the File into Blob chunks. Just thinking out loud.
Do you mean the memory of the client (the browser) is not enough to read a large file, like a 4 GB file?
Do you mean the memory of the client (the browser) is not enough to read a large file, like a 4 GB file?
Yes, I don't know how easy it will be. You might not be able to support older browsers.
I have a 4 GB file I want to upload.