eric-addcream closed this issue 4 years ago
Hey @eric-addcream, so sorry about the delay! Yes, I totally agree that this is a shortcoming of the library right now since any decently sized chunks are going to end up with a very choppy progress update profile. If we did go with a progress update for each chunk, how would you want that API to look from the event perspective? I think we should probably be doing the math in the library to figure out what the current percentage of each chunk is, then add that to the current total percentage.
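The math being described could look something like the sketch below. The function name and parameters are illustrative only, not part of UpChunk's API: assuming equally sized chunks, each chunk owns an equal share of the total percentage, and the current chunk's partial progress is scaled into that share.

```javascript
// Hypothetical sketch of the per-chunk progress math discussed above.
// Assumes equally sized chunks; names are illustrative, not UpChunk's API.
function overallProgress(chunkCount, currentChunkIndex, currentChunkPct) {
  // Percentage contributed by chunks that have fully finished uploading.
  const completedPct = (currentChunkIndex / chunkCount) * 100;
  // Share of the total that the in-flight chunk is worth.
  const chunkShare = 100 / chunkCount;
  return completedPct + (currentChunkPct / 100) * chunkShare;
}

// Second of four chunks is half done: 25 + 12.5
console.log(overallProgress(4, 1, 50)); // 37.5
```

The last chunk is often smaller than the rest, so a real implementation would likely weight by bytes rather than chunk count.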
Re: latency, we're working on this now! We think we've got a plan to improve that across the board.
Personally I will only be using the percentage right now, but I can see how 'uploaded bytes' might be interesting in other projects. If it's possible I don't see any downside to including total/uploaded bytes to make it even more flexible.
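A richer event payload along the lines requested here might look like the following sketch. The field names are hypothetical; at the time of this thread UpChunk's progress event carried only a percentage.

```javascript
// Hypothetical shape for a richer "progress" event detail, carrying raw
// byte counts alongside the percentage. Field names are illustrative only.
function makeProgressDetail(uploadedBytes, totalBytes) {
  return {
    uploadedBytes,
    totalBytes,
    percentage: (uploadedBytes / totalBytes) * 100,
  };
}

const detail = makeProgressDetail(25 * 1024 * 1024, 100 * 1024 * 1024);
console.log(detail.percentage); // 25
```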
Good to hear you're working on the latency. Excited to see what it will bring!
Hi @mmcc
We just experienced the same issue (high latency after each chunk, and with large chunks choppy progress info). What is the status of improving the latency or providing progress info during chunk upload?
I've spent some time looking into this today. The first problem seems to be that there is no way for `fetch` to report on uploaded bytes, so that's that. I was trying to use `XMLHttpRequest`, but the problem is that you are not allowed to specify the `Content-Length` header, and `XMLHttpRequest` doesn't add the header when sending the blob. The Google API requires this header to be set, unless using chunked transfer encoding. And the `Transfer-Encoding` header is also disallowed from being set by `XMLHttpRequest` :(
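For what it's worth, even though those headers can't be set manually, `XMLHttpRequest` does expose upload progress through its `upload` object, which is what makes intra-chunk reporting possible at all. A minimal sketch of wiring that up (the `reportPct` callback is hypothetical; anything exposing `upload.addEventListener` works here):

```javascript
// Sketch: wire intra-chunk progress via XMLHttpRequest's upload events.
// reportPct is a hypothetical callback receiving a 0-100 percentage.
function attachChunkProgress(xhr, reportPct) {
  xhr.upload.addEventListener('progress', (event) => {
    // lengthComputable is false when the browser can't determine the total.
    if (event.lengthComputable) {
      reportPct((event.loaded / event.total) * 100);
    }
  });
}

// In a browser: attachChunkProgress(new XMLHttpRequest(), pct => console.log(pct));
```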
I'm hitting the same issue. I thought the progress equaled uploadedBytes/totalBytes :(
@damechen It does equal uploaded/total, it's just only reported after each chunk is uploaded, which is why progress is updated in such a choppy fashion.
Say my video has a size of 100MB. If the chunk size is 1MB, the progress would be much smoother, but would there be any performance impact from such a small chunk size?
I haven't done any benchmarking, but my guess is you wouldn't be killing performance by too much.
I think we could potentially use XHR or Axios instead. Axios would make the switch pretty easy, but it would add quite a bit to the bundle size.
Thanks Matt!!!
Anything done about this? There's still several seconds of delay between chunks.
We’re going to try out switching to XHR so we can get progress on each chunk request as they upload. We’re hoping to get to this next sprint, but in the meantime you might want to experiment with using a smaller chunk size to get more frequent progress updates.
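The smaller-chunk workaround might look like the sketch below. The endpoint URL is a placeholder, and the option names should be checked against the README for the version you install; as I recall, `chunkSize` is specified in kilobytes and must be a multiple of 256.

```javascript
// Hedged sketch of shrinking the chunk size for more frequent progress
// updates. Endpoint is a placeholder; verify option names in the README.
// import * as UpChunk from '@mux/upchunk';

const options = {
  endpoint: 'https://example.com/upload-url', // placeholder URL
  // file: pickedFile,                        // a File from an <input>
  chunkSize: 1024, // 1 MB chunks instead of the larger default
};

// const upload = UpChunk.createUpload(options);
// upload.on('progress', (event) => console.log(event.detail));

console.log(options.chunkSize % 256 === 0); // true: must be a multiple of 256 KB
```

The trade-off, as noted above, is more requests (and more per-request latency) for the same file.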
Just cut the 2.0 release which includes intra-chunk progress updates! Give it a try and let us know if you run into any issues.
This is so cool. I’ll definitely give it a try tomorrow!
Thanks, Damon
Currently, the "progress" event is only emitted after each chunk has been uploaded. My request is to emit this event while uploading each chunk as well.
AFAIK, this can't be accomplished with the native `fetch` function that this library uses, so this feature would require the use of `XMLHttpRequest` or a library like axios.

The reason for this request is to be able to give feedback to the user more often while uploading larger chunks. Our uploads to Mux seem to have quite high latency (several seconds) after each chunk. To mitigate this we could increase the chunk size, but then we run into this problem with the lack of progress feedback.