muxinc / upchunk

Uploads Chunks! Takes big files, splits them up, then uploads each one with care (and PUT requests).
MIT License

'progress' event is triggered right after 'success' event #51

Open dushajni opened 3 years ago

dushajni commented 3 years ago

As I wrote in the title, I noticed that the progress event is triggered after the upload is completed. Because of that, I get the wrong impression that the upload may not be finished yet.

I console.logged the output and made this screenshot: SCREENSHOT
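For readers trying to reproduce the ordering described above, a minimal sketch based on UpChunk's documented `createUpload` API might look like the following; the endpoint URL and chunk size are placeholders, not values from this thread.

```ts
// Minimal reproduction sketch (illustrative, not from the thread).
// The endpoint is a placeholder signed PUT URL; event names and the
// createUpload options follow UpChunk's documentation.
import * as UpChunk from '@mux/upchunk';

const input = document.querySelector<HTMLInputElement>('input[type="file"]')!;

input.addEventListener('change', () => {
  const upload = UpChunk.createUpload({
    endpoint: 'https://example.com/signed-upload-url', // placeholder PUT URL
    file: input.files![0],
    chunkSize: 5120, // in KB
  });

  upload.on('progress', (event) => {
    // The report above is that this logs once more *after* 'success'.
    console.log('progress', event.detail);
  });

  upload.on('success', () => {
    console.log('success');
  });
});
```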

mmcc commented 3 years ago

Thanks for the report! We should be cancelling all event handlers after success, but could you tell me more about what this is causing in practice?

dushajni commented 3 years ago

> Thanks for the report! We should be cancelling all event handlers after success, but could you tell me more about what this is causing in practice?

Sure. I've implemented Mux inside a React / Redux app, and I'm dispatching actions to a reducer from each event handler. So if I dispatch the "upload completed" action and an "upload in progress" event fires after it, it makes a mess of my logic. Upload in progress !== upload completed, so when I check whether an upload is in progress before executing some code, this extra "in progress" event is misleading.
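A consumer-side guard in the reducer can sidestep the late event until the library cancels its handlers itself. The state shape and action names below are illustrative, not taken from the thread.

```ts
// Illustrative Redux-style reducer guard (hypothetical names, not from the thread):
// ignore late 'progress' dispatches once the upload was already marked completed.
type UploadState = { status: 'idle' | 'uploading' | 'completed'; percent: number };

type UploadAction =
  | { type: 'UPLOAD_PROGRESS'; percent: number }
  | { type: 'UPLOAD_COMPLETED' };

const initialState: UploadState = { status: 'idle', percent: 0 };

function uploadReducer(state: UploadState = initialState, action: UploadAction): UploadState {
  switch (action.type) {
    case 'UPLOAD_PROGRESS':
      // Drop stray progress events that arrive after completion.
      if (state.status === 'completed') return state;
      return { ...state, status: 'uploading', percent: action.percent };
    case 'UPLOAD_COMPLETED':
      return { ...state, status: 'completed', percent: 100 };
    default:
      return state;
  }
}
```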

ramiel commented 2 years ago

Same for me. I use the progress event to show a percentage and clear it on success. A late progress event then sets the percentage again, showing the wrong UI.

adibbz commented 1 year ago

Running into a similar issue: progress events continue to fire after I've paused the upload, which creates weird effects in my UI.
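Until this is addressed in the library, one consumer-side workaround is a local mute flag. The sketch below assumes UpChunk's documented `pause()` / `resume()` methods; `renderPercent` and `wirePauseToggle` are hypothetical helpers, not part of UpChunk.

```ts
// Workaround sketch (illustrative): ignore progress events while paused.
import * as UpChunk from '@mux/upchunk';

type Upload = ReturnType<typeof UpChunk.createUpload>;

// Wires a pause toggle that also mutes late progress events on the consumer side.
function wirePauseToggle(upload: Upload, renderPercent: (pct: number) => void) {
  let paused = false;

  upload.on('progress', (event) => {
    if (paused) return; // drop progress events that arrive after pausing
    renderPercent(event.detail);
  });

  return function togglePause() {
    paused = !paused;
    if (paused) {
      upload.pause();
    } else {
      upload.resume();
    }
  };
}
```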

dylanjha commented 1 year ago

Thanks for re-surfacing this, @adibbz -- I also want to double-check with folks on this thread whether it's still happening on the latest release that went out 9 days ago: https://github.com/muxinc/upchunk/releases/tag/v3.0.0

There was a significant refactor to improve how files consume memory internally (#89). I don't think it touched anything about how events are fired, but it would be good to confirm.

#89 was our highest-priority fix, and now that it's merged I'm hoping that frees up some bandwidth for other fixes like this one.