pionl / laravel-chunk-upload

A basic implementation of chunked uploads with support for multiple providers such as jQuery-File-Upload, Plupload, Dropzone and resumable.js
MIT License
617 stars, 167 forks

$save->isFinished() always false #37

Closed sagarrabadiya closed 6 years ago

sagarrabadiya commented 6 years ago

Thanks for a great package. I am facing a problem with both Dropzone and resumable.js. I used the exact same code from the example and everything works so far, but this check:

$save = $receiver->receive();

// check if the upload has finished (in chunk mode it will send smaller files)
if ($save->isFinished()) {
    // ...
}

The condition above always fails, and in the response for the last chunk

$handler->getPercentageDone()

gives 100 percent, but since isFinished() returns false, the actual file is never assembled. I have tried both Dropzone and resumable.js, and both give the same result.
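For context, the full flow being discussed looks roughly like this. A sketch based on the package's example code referenced in the issue; the controller class, route, and the saveFile() helper are illustrative assumptions, while the FileReceiver/HandlerFactory calls follow the example:

```php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

class UploadController extends Controller
{
    public function upload(Request $request)
    {
        // wrap the uploaded chunk in a receiver; the handler is
        // auto-detected from the request (Dropzone, resumable.js, ...)
        $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));

        if ($receiver->isUploaded() === false) {
            throw new UploadMissingFileException();
        }

        $save = $receiver->receive();

        // in chunk mode this should become true only on the final chunk;
        // the bug reported here is that it stays false
        if ($save->isFinished()) {
            return $this->saveFile($save->getFile()); // hypothetical helper
        }

        // not finished yet: report progress so the client keeps sending chunks
        $handler = $save->handler();

        return response()->json([
            'done' => $handler->getPercentageDone(),
        ]);
    }
}
```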

pionl commented 6 years ago

Hi, thank you 👍

Can you share the JS code you are using? Also, could you check whether the chunks are created?

Martin

sagarrabadiya commented 6 years ago

(screenshot of the JS code)

It is pretty much standard JS, nothing special, and yes, I can see the chunk directory is created under storage/app.

sagarrabadiya commented 6 years ago

I can see Dropzone making the calls to the server for the chunks, and the responses contain the percentage. Since I am using the exact same backend implementation given in the example, each chunk upload request's response is the percentage. The last chunk should return a different response, but currently it returns 100 as the percentage, and isFinished() returns false.

sagarrabadiya commented 6 years ago

I have checked and the chunks are not created. What could be the problem?

sagarrabadiya commented 6 years ago

Changing the chunk disk to public works.

vedmant commented 6 years ago

I have the same issue; it stopped working after I changed the storage.disk option to a custom disk. I use resumable.js.

pionl commented 6 years ago

I will investigate this issue and check why a different disk is not working. @sagarrabadiya can you show me your disk config so I can test with your settings? Thank you

vedmant commented 6 years ago

In my case that's just another local disk:

    'shared' => [
        'driver' => 'local',
        'root' => storage_path('shared'),
    ],
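For the package to store chunks on a disk like this, the package config also has to point at it. A sketch of the relevant fragment, assuming the config layout implied by the storage.disk option mentioned above (key names are an assumption about the package's config file):

```php
// config/chunk-upload.php (fragment); 'shared' is the custom disk defined above
'storage' => [
    'chunks' => 'chunks',  // directory (on the disk) where chunk files are written
    'disk'   => 'shared',  // must be a local-driver disk the package can write to
],
```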
sagarrabadiya commented 6 years ago

I used the public disk but that didn't work, so I changed to the local disk and set the chunk folder permissions to 777, and then it started creating chunks.

vedmant commented 6 years ago

In my case all chunks are stored normally, but after uploading the last one isFinished() is still false, so it can't complete the upload and assemble the final file. I will investigate further next week and can open a pull request if I find a solution.

pionl commented 6 years ago

@vedmant Did you use simultaneous uploads? The new version fixes this: #38

@sagarrabadiya The public dir should not be used for this when using Storage. That is probably the issue here. But I will try it too and update the wiki for future reference.

vedmant commented 6 years ago

@pionl I use the default resumable.js setting for this; by default it has 3 simultaneous uploads. I use the latest version, 1.2.4. On my server I use multiple php-fpm containers with a shared mounted disk between them, so all containers have access to all chunks. Sessions are shared through the DB.

Saarnaki commented 6 years ago

I can also confirm this issue with ResumableJS. On localhost (OS X with XAMPP) everything works nicely, but in production (CentOS 7 & nginx) it always returns false.

During several hours of investigation (and banging my head against the wall over this issue), I seem to have narrowed down the cause: the handleChunk() method in ParallelSave.php.

It appears that on my production machine the final-chunk test fails. I haven't gotten around to testing whether the issue is that $files->count() + 1 behaves differently under nginx or that $this->totalChunks is messed up.

In my case I solved it like this (the code was posted as a screenshot, which is not preserved here):

And now it's working. Maybe not the best solution, but it works for me as a quick fix since our product is in beta and already used by many users.

sagarrabadiya commented 6 years ago

I would love to see this issue fixed; it causes many messes 😆 @pionl were you able to reproduce this problem? I would advise you to try to reproduce it in a Linux environment with nginx; for me, too, the Linux + nginx environment has the problem. Changing simultaneousUploads to 1 works, but having multiple parallel chunks makes the upload faster (so it's a bit of a sacrifice on speed).

pionl commented 6 years ago

I use Vagrant with Linux + Apache + PHP 7.0 and it works as expected.

But I've found some incomplete implementation details (bugs):

  1. The file detection was missing correct filtering: any leftover incomplete chunks from a different file caused a bug. -> Can be fixed (I've already done so in my repo).
  2. The percentage is not correct: it uses the chunk number. If I used the file count, it could return 100% on multiple requests, so I'm leaving the chunk count.
  3. If I send a smaller file (with max 3 simultaneous uploads set, the file is split into 3 chunks), all chunk files are created at roughly the same time and I can't detect which request is the "final" one, because every request can see all 3 uploaded files. Even using a lock file did not help (I tried). (This matches the reports from @Saarnaki and @sagarrabadiya.) The solution could be:

a. Use a queue, which would solve the issue (not ideal if you need the response).
b. Use only the last chunk for detection. This is probably not ideal, but it is the only solution I've found. If I recall correctly from the docs, the last chunk might not be sent in the last request, but at this moment it is OK. This solution will not work if resumable uploads are implemented.
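Option (b) can be sketched roughly like this. A simplified illustration of the idea, not the package's actual code; the function and variable names are assumptions:

```php
<?php
// Simplified illustration of "use only the last chunk for detection":
// instead of treating any request that sees all chunk files as final,
// only the request that carries the last chunk index may assemble the file.

function isFinalRequest(int $currentChunk, int $totalChunks, int $storedChunks): bool
{
    // the request must carry the last chunk AND all chunks must be on disk
    return $currentChunk === $totalChunks && $storedChunks === $totalChunks;
}

// With 3 parallel uploads of a 3-chunk file, all three requests may observe
// 3 stored chunks, but only the one that uploaded chunk 3 is treated as final.
var_dump(isFinalRequest(3, 3, 3)); // bool(true): last chunk, all chunks stored
var_dump(isFinalRequest(2, 3, 3)); // bool(false): not the last chunk
```

The trade-off mentioned above applies: this relies on the last chunk arriving in the last request, which resumable-style uploads do not guarantee.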

At this moment I do not have much time; even this implementation was made on request between projects. Sorry for the mistakes; while testing with large files the error didn't occur. Unfortunately I do not currently use this package in any project, so the focus for changing things is hard to find.

This morning I fixed this issue (it took at least 2.5 hours). Please try the master branch and let me know. It now works with both large and small files.

Different storage is not implemented at this moment; if anyone needs this functionality, please make a PR. Regarding the original issue from @sagarrabadiya: the Laravel framework recommends using the storage/app folder and making a symlink to the folder you use.

pionl commented 6 years ago

The change works for me. I've released a new version to prevent any further issues for new users. Let me know if it works better now. Thanks

Version: 1.2.6

sagarrabadiya commented 6 years ago

I am still having the issue with resumable.js parallel chunk uploads and didn't have time to send a PR to this package, so I moved to https://github.com/dilab/resumable.php, which I think is the official PHP backend for resumable.js. I will probably send a PR to this package later to fix the problem. Thanks for your efforts and time @pionl 👍

pionl commented 6 years ago

Strange; checking its code, it works similarly to the previous implementation (just checking whether a chunk exists via the number of chunks), so there should be similar issues when sending a small file (only 3 chunks using 3 parallel requests).

Sorry for the trouble. Currently I'm not able to reproduce the bug (using chunk-example). How big is the file you are testing with? Make sure you are using the storage/app folder (the default config) and it should work.

sagarrabadiya commented 6 years ago

Yes, all the configs are default and I'm using 4 parallel chunks. I am using the upload endpoint as an API with Laravel Passport OAuth2, so maybe that is the issue, since the package uses the session ID to make the file name unique, but I'm not sure about it.

pionl commented 6 years ago

It should automatically detect whether the session is ready and fall back to the browser agent. Resumable also sends its own custom identifier, so it should be OK.

If you have time, it should be easy to debug the flow, or I can add Log::debug calls to help identify the problem.

I would check this line for the number of chunks detected on each request: https://github.com/pionl/laravel-chunk-upload/blob/master/src/Save/ParallelSave.php#L103
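A sketch of the kind of temporary logging meant here, placed at that line; Log::debug is the standard Laravel facade, and $files / $this->totalChunks are the values discussed earlier in this thread (their exact availability at that point is an assumption):

```php
// Temporary logging sketch for ParallelSave::handleChunk(); compare the
// stored chunk count against the expected total on every request.
\Illuminate\Support\Facades\Log::debug('chunk-upload: handleChunk', [
    'storedChunks' => $files->count(),
    'totalChunks'  => $this->totalChunks,
]);
```

With parallel uploads, the log should show whether any request ever observes storedChunks + 1 equal to totalChunks, which is the final-chunk test that reportedly fails under nginx.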