gilbertchen / duplicacy

A new generation cloud backup tool
https://duplicacy.com

duplicacy copy to b2 failing #460

Open daghub opened 6 years ago

daghub commented 6 years ago

I am experimenting with duplicacy for my main backup, but am running into some issues. It was working perfectly, but now I can no longer copy from my local backup to B2.

I ran it with a single thread so the log is easier to follow, and there seem to be large gaps of 7 minutes or so in the log. I wonder what duplicacy is doing during those gaps. My bandwidth seems fine, 100/100 Mbit/s. log.txt

Linux Mint 17.3 Rosa
duplicacy -d -log -profile localhost:2222 copy -to b2

(log attached)

daghub commented 6 years ago

Eventually the copy fails with

Failed to upload the chunk 97893b700dc8514e2394e794e62bcf47808170157f1c49df34bd411f63a6e231: Maximum backoff reached

This has now failed several times during my nightly run. Previously I uploaded all 80,000+ chunks without a hitch. The only change I made was to widen the file selection (adding a symlink), which created a bunch of new chunks. Can you tell from the logs whether this is a B2 issue?

daghub commented 6 years ago

duplicacy -version

VERSION: 2.1.0

gilbertchen commented 6 years ago
2018-07-12 09:36:13.180 DEBUG BACKBLAZE_UPLOAD URL request 'https://pod-000-1059-09.backblaze.com/b2api/v1/b2_upload_file/2022aa088d9b55ff611e021a/c000_v0001059_t0011' returned an error: Post https://pod-000-1059-09.backblaze.com/b2api/v1/b2_upload_file/2022aa088d9b55ff611e021a/c000_v0001059_t0011: EOF

This is either a network issue or a B2 issue. If it persists, the best option may be to increase the number of tries from 8 to 12:

https://github.com/gilbertchen/duplicacy/blob/dfdbfed64b7766616a05b9e0524b11556e789d3b/src/duplicacy_b2client.go#L528
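
For context, the retry logic there is roughly of this shape (a simplified sketch with illustrative names, not the actual implementation): each failed attempt doubles the wait, and the attempt limit is the constant the linked line controls.

```go
package b2retry // illustrative package name, not part of duplicacy

import (
	"errors"
	"time"
)

// maxTries plays the role of the "8" in the linked line; raising it to 12
// lets the client retry longer before giving up on a chunk.
const maxTries = 8

// uploadWithRetries is a sketch of a bounded exponential-backoff loop:
// try, wait, double the delay, and fail once the attempts run out.
func uploadWithRetries(upload func() error) error {
	backoff := time.Second
	for attempt := 0; attempt < maxTries; attempt++ {
		if err := upload(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return errors.New("Maximum backoff reached")
}
```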

daghub commented 6 years ago

Thank you for the quick response! This does indeed look like a transient B2 or networking issue; it is now running much better, chewing away at a decent speed. Then, after about an hour, I started getting the EOF error on the POST again, and the copy gave up because of the backoff limit.

I have this set up as a cron job running once every 24 hours. It would be great to either allow more retries or, even better, back off exponentially up to a certain cap (for example 120 s) and then keep retrying at that interval indefinitely. That way the upload would resume once the network/cloud-provider issue is sorted out, and the cloud backend would not be overwhelmed if the error was returned because of throttling/overload.

(Not sure how it is implemented at the moment)
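
Roughly what I have in mind, as a sketch only (not duplicacy's current code; the 120 s cap is just an example value):

```go
package b2retry // illustrative only, not duplicacy's current code

import "time"

// maxBackoff is the suggested cap on the retry interval, e.g. 120 s.
const maxBackoff = 120 * time.Second

// uploadUntilDone backs off exponentially up to maxBackoff and then keeps
// retrying at that interval indefinitely, so the copy resumes on its own
// once the network or B2 recovers, without hammering an overloaded backend.
func uploadUntilDone(upload func() error) {
	backoff := time.Second
	for {
		if err := upload(); err == nil {
			return
		}
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```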

Thank you for a really cool product BTW!

TowerBR commented 6 years ago

I think a -number-of-retries option or something similar would be useful, perhaps even as one of the global options.

dreamflasher commented 5 years ago

This is a duplicate of: https://github.com/gilbertchen/duplicacy/issues/423