iterate-ch / cyberduck

Cyberduck is a libre FTP, SFTP, WebDAV, Amazon S3, Backblaze B2, Microsoft Azure & OneDrive, and OpenStack Swift file transfer client for Mac and Windows.
https://cyberduck.io/
GNU General Public License v3.0

Better handling for failed 0-byte file uploads #13762

Open Statick opened 2 years ago

Statick commented 2 years ago

Occasionally a file uploaded to a vault in Google Drive through Mountain Duck (in online mode) is later found not to have uploaded successfully. The file appears to upload correctly without errors, but subsequent directory listings in either Mountain Duck or Cyberduck do not show it. If you then attempt to upload the file again at a later date, through either Mountain Duck or Cyberduck, the upload fails: in MD the error returned through Windows is simply "access denied", but in Cyberduck we see a bit more - the overwrite dialog appears, followed by an error about the existing file length being -88 bytes. Remembering that there is an 88-byte overhead on encrypted files, this got me thinking: a reported length of -88 implies the stored file is actually zero bytes, because subtracting the 88-byte overhead from a 0-byte file leaves -88. Sure enough, if I open Google Drive in a browser and navigate to the encrypted folder with the problem (taking the URL from Cyberduck) and scroll through the directory listing, there are a couple of .c9r files in there with a file size of 0 bytes. These are obviously failed uploads, as not even the 88-byte header was written, and the persistence of these 0-byte files prevents any further re-upload of the same files. If I then manually delete the 0-byte files from the Google Drive browser and restart MD or CD, the files can finally be re-uploaded.
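For reference, the arithmetic behind the -88 figure, assuming the cleartext length shown by Cyberduck is derived by subtracting the fixed 88-byte Cryptomator file header from the ciphertext size reported by Google Drive (an illustrative sketch only, ignoring per-chunk overhead; this is not the actual Cyberduck code):

```python
CRYPTOMATOR_HEADER_BYTES = 88  # fixed per-file header overhead in a Cryptomator vault


def apparent_cleartext_size(ciphertext_size: int) -> int:
    """Rough estimate of cleartext length: ciphertext size minus the fixed header."""
    return ciphertext_size - CRYPTOMATOR_HEADER_BYTES


print(apparent_cleartext_size(88))  # 0   -> a valid, empty encrypted file
print(apparent_cleartext_size(0))   # -88 -> a truncated upload where not even the header was written
```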

It would be great if both MD and CD could handle this error better - any 0-byte .c9r file on the cloud storage is definitely corrupt and can safely be deleted, so attempting to overwrite one should never fail with errors. In a recent cloud backup of approximately 50,000 files I have about 200 that did not upload correctly, all showing this same problem. Fixing them currently means manually navigating to each encrypted folder, scrolling through to find the 0-byte files (Google does not let you sort the page by file size), then manually deleting them - a very slow process, as they are randomly scattered across dozens of different folders.
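For illustration, a hypothetical cleanup sketch (not a feature of Cyberduck or Mountain Duck) could automate the manual deletion step with the Google Drive API: list files, keep only `.c9r` entries reported as 0 bytes, and delete them. The credential file name (`token.json`) is an assumption, and filtering is done client-side because Drive's query language cannot filter on size:

```python
# Hypothetical cleanup sketch: remove 0-byte .c9r objects from a Drive account
# so the originals can be re-uploaded. Assumes google-api-python-client and
# OAuth credentials with the drive scope already stored in token.json.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/drive"]
)
drive = build("drive", "v3", credentials=creds)

page_token = None
while True:
    resp = drive.files().list(
        q="trashed = false",
        fields="nextPageToken, files(id, name, size)",
        pageSize=1000,
        pageToken=page_token,
    ).execute()
    for f in resp.get("files", []):
        # Folders have no 'size' field, so only regular 0-byte .c9r files match.
        if f["name"].endswith(".c9r") and f.get("size") == "0":
            print(f"deleting 0-byte {f['name']} ({f['id']})")
            drive.files().delete(fileId=f["id"]).execute()
    page_token = resp.get("nextPageToken")
    if page_token is None:
        break
```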

Statick commented 2 years ago

Example shown below: these files are failed uploads. They prevent further re-uploads of the same files because attempts to overwrite them fail with errors, and they must be manually deleted through the Drive browser interface. They are obviously corrupt, as no .c9r file should ever be zero bytes long.

[Screenshot attached: 2022-09-29 12:11, Google Drive folder PUZE3PBWJMTJJ7JCA2PSR2UZFG5QWE]

keliew commented 2 months ago

I also face a similar situation (in online mode). A file is moved to a Mountain Duck WebDAV folder, sometimes renamed, and then in the server logs (Synology) it becomes 0 bytes. It happens occasionally. If I wait a few seconds before doing anything to the file, e.g. renaming or saving, then it's fine.