Berimor66 / duplicati

Automatically exported from code.google.com/p/duplicati

OnlineStorageSolution webdav does not support parallel backups #412

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Be a customer of OnlineStorageSolutions.
2. Schedule two backup tasks to be launched at the same time.
3. Start both backups.

What is the expected output? What do you see instead?
OLS permits only one backup at a time on their WebDAV server. It would be nice 
if Duplicati could detect that the initial backup failed when the next one 
starts and, instead of waiting for the next scheduled event, queue the failed 
backup for execution after the one in progress has finished.

What version of the product are you using? On what operating system?
1.1.99.766 x64

What backend (destination) are you using?
WebDAV

Please provide any additional information below.
Current workarounds include:
- Manually scheduling backups so they don't happen at the same time. Drawback: 
if a backup takes a long time to complete, another one may interrupt it when it 
is scheduled to start.
- Adding a knob to indicate backup groups that should be queued instead of 
executed in parallel (see the sketch below). Drawback: this adds an extra 
configuration step that makes Duplicati more complex to use than necessary.
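
For illustration, a rough sketch of what the "backup groups" knob could look 
like: a scheduler that holds one lock per destination, so tasks sharing a 
destination queue up while tasks for different destinations still run in 
parallel. All type and member names here are hypothetical, not Duplicati's 
actual API:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: serialize backups that target the same destination.
    class DestinationSerializer
    {
        // One lock object per destination URL.
        private readonly Dictionary<string, object> m_locks = new Dictionary<string, object>();

        public void RunBackup(string destinationUrl, Action backup)
        {
            object destLock;
            lock (m_locks)
            {
                if (!m_locks.TryGetValue(destinationUrl, out destLock))
                    m_locks[destinationUrl] = destLock = new object();
            }

            // A second backup to the same destination blocks here until the
            // first one finishes, instead of failing on the server side.
            lock (destLock)
                backup();
        }
    }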

Original issue reported on code.google.com by eric.mo...@gmail.com on 4 May 2011 at 11:07

GoogleCodeExporter commented 9 years ago
I also do not like anything that needs to be configured. So here are some 
questions that might lead to a solution without configuration:

What is the benefit of being able to perform two backups at the same time to 
the same remote destination?

Would it make sense to allow or disallow parallel transfers for specific 
destinations?

Original comment by rst...@gmail.com on 5 May 2011 at 8:50

GoogleCodeExporter commented 9 years ago
Clarifying my initial request: where multiple backups to the same destination 
are allowed, I would not change anything and would keep it the way it is 
implemented today. What Duplicati could do is monitor backups running 
simultaneously against the same destination; if one of them fails within the 
few seconds following the simultaneous start of another backup, it could 
serialize the failed backup for execution after the current one (not using a 
fixed time trigger).
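
A minimal sketch of that heuristic, with invented names (BackupRun and its 
fields do not exist in Duplicati; the grace window is a guess):

    using System;

    // Hypothetical sketch: a backup that fails shortly after another backup
    // to the same destination started is re-queued to run once the active
    // one completes.
    class BackupRun
    {
        public string Destination;
        public DateTime StartedAt;
        public DateTime FailedAt;
    }

    class FailureSerializer
    {
        // The "few seconds" window described above; the exact value is a guess.
        private static readonly TimeSpan Grace = TimeSpan.FromSeconds(10);

        public static bool ShouldRequeue(BackupRun failed, BackupRun active)
        {
            return failed.Destination == active.Destination
                && failed.FailedAt >= active.StartedAt
                && failed.FailedAt - active.StartedAt < Grace;
        }
    }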

Original comment by eric.mo...@gmail.com on 5 May 2011 at 6:21

GoogleCodeExporter commented 9 years ago
Duplicati will not run two backups in parallel; if it does, that is a bug.

If you run them on two different machines, that is a hard-to-solve 
coordination problem. Suppose the active backup leaves a lock file: what if it 
forgets to remove it? How long before the file can be considered stale? And so 
on.
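
A sketch of why this is hard, assuming a hypothetical ILockStore interface 
(not Duplicati's backend API): the staleness timeout is a guess, and the check 
itself is racy.

    using System;

    // Hypothetical interface; Duplicati's real backend API differs.
    interface ILockStore
    {
        DateTime? ReadLockTimestamp(string name); // null if no lock file
        void WriteLockTimestamp(string name, DateTime timestampUtc);
    }

    class RemoteLock
    {
        // How long before a leftover lock file is considered stale?
        // Any fixed value is a guess, which is exactly the problem.
        private static readonly TimeSpan StaleAfter = TimeSpan.FromHours(1);

        public static bool TryAcquire(ILockStore store)
        {
            DateTime? held = store.ReadLockTimestamp("duplicati.lock");
            if (held == null || DateTime.UtcNow - held.Value > StaleAfter)
            {
                // Still racy: two machines can both pass the check above
                // before either writes the lock file.
                store.WriteLockTimestamp("duplicati.lock", DateTime.UtcNow);
                return true;
            }
            return false;
        }
    }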

Is this related to issue #389?

Original comment by kenneth@hexad.dk on 5 May 2011 at 8:00

GoogleCodeExporter commented 9 years ago
This is not related to issue #389.

Here is the log output for an interrupted differential backup when other 
scheduled backups start at the same time. Note that "Comptes" is the only one 
lasting a couple of hours here. This happened two days in a row and was 
resolved after I performed a manual run outside of the scheduled time frames.

You might be able to reproduce this behaviour on an OnlineStorageSolutions test 
account.

5/4/2011 1:04PM Legal 16.47KB
BackupType      : Incremental
BeginTime       : 05/04/2011 13:06:08
EndTime         : 05/04/2011 13:06:44
Duration        : 00:00:35.5170315
DeletedFiles    : 0
DeletedFolders  : 0
ModifiedFiles   : 0
AddedFiles      : 0
AddedFolders    : 0
ExaminedFiles   : 31
OpenedFiles     : 0
SizeOfModified  : 0
SizeOfAdded     : 0
SizeOfExamined  : 0
Unprocessed     : 0
TooLargeFiles   : 0
FilesWithError  : 0
OperationName   : Backup
BytesUploaded   : 16864
BytesDownloaded : 4382
RemoteCalls     : 12

Cleanup output:

5/4/2011 1:04PM Correspondance 16.48KB
BackupType      : Incremental
BeginTime       : 05/04/2011 13:04:56
EndTime         : 05/04/2011 13:06:04
Duration        : 00:01:08.5979236
DeletedFiles    : 0
DeletedFolders  : 0
ModifiedFiles   : 0
AddedFiles      : 0
AddedFolders    : 0
ExaminedFiles   : 251
OpenedFiles     : 0
SizeOfModified  : 0
SizeOfAdded     : 0
SizeOfExamined  : 0
Unprocessed     : 0
TooLargeFiles   : 0
FilesWithError  : 0
OperationName   : Backup
BytesUploaded   : 16880
BytesDownloaded : 10273
RemoteCalls     : 12

Cleanup output:

5/4/2011 1:04PM gitolite 16.48 KB
BackupType      : Incremental
BeginTime       : 05/04/2011 13:04:00
EndTime         : 05/04/2011 13:04:50
Duration        : 00:00:49.6008370
DeletedFiles    : 0
DeletedFolders  : 0
ModifiedFiles   : 0
AddedFiles      : 0
AddedFolders    : 0
ExaminedFiles   : 288
OpenedFiles     : 0
SizeOfModified  : 0
SizeOfAdded     : 0
SizeOfExamined  : 0
Unprocessed     : 0
TooLargeFiles   : 0
FilesWithError  : 0
OperationName   : Backup
BytesUploaded   : 16880
BytesDownloaded : 3840
RemoteCalls     : 12

Cleanup output:

5/4/2011 1:04PM Comptes
Error: Failed to upload file: Unable to write data to the transport connection: 
An existing connection was forcibly closed by the remote host.

Error: System.Exception: Failed to upload file: Unable to write data to the 
transport connection: An existing connection was forcibly closed by the remote 
host. ---> System.IO.IOException: Unable to write data to the transport 
connection: An existing connection was forcibly closed by the remote host. ---> 
System.Net.Sockets.SocketException: An existing connection was forcibly closed 
by the remote host
   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   --- End of inner exception stack trace ---
   at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
   at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at Duplicati.Library.Utility.Utility.CopyStream(Stream source, Stream target, Boolean tryRewindSource)
   at Duplicati.Library.Backend.WEBDAV.Put(String remotename, Stream stream)
   at Duplicati.Library.Main.BackendWrapper.PutInternal(BackupEntryBase remote, String filename)
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.BackendWrapper.PutInternal(BackupEntryBase remote, String filename)
   at Duplicati.Library.Main.BackendWrapper.Put(BackupEntryBase remote, String filename)
   at Duplicati.Library.Main.Interface.Backup(String[] sources)
   at Duplicati.GUI.DuplicatiRunner.ExecuteTask(IDuplicityTask task)
InnerError: System.IO.IOException: Unable to write data to the transport 
connection: An existing connection was forcibly closed by the remote host. ---> 
System.Net.Sockets.SocketException: An existing connection was forcibly closed 
by the remote host
   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   --- End of inner exception stack trace ---
   at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
   at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at Duplicati.Library.Utility.Utility.CopyStream(Stream source, Stream target, Boolean tryRewindSource)
   at Duplicati.Library.Backend.WEBDAV.Put(String remotename, Stream stream)
   at Duplicati.Library.Main.BackendWrapper.PutInternal(BackupEntryBase remote, String filename)
InnerError: System.Net.Sockets.SocketException: An existing connection was 
forcibly closed by the remote host
   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)

Original comment by eric.mo...@gmail.com on 6 May 2011 at 11:24

GoogleCodeExporter commented 9 years ago
I can't see whether "Comptes" runs in parallel with the others.

The other backups seem to run right after one another, which I believe is the 
correct way to do it. I am guessing that OLS has a limit on the number of 
connections, which gets triggered sometimes (e.g. by a DDoS protection 
system). If this is the case, then it may be fixable once Hugh gets the 
"persistent connection" code in place, as that will allow Duplicati to re-use 
a connection to send multiple files.
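
At the framework level, connection re-use looks roughly like this (plain .NET 
API, not Duplicati's actual WEBDAV backend code): with KeepAlive set, 
consecutive requests to the same host share one TCP/TLS connection instead of 
opening a new one per file.

    using System;
    using System.IO;
    using System.Net;

    class KeepAliveUpload
    {
        // Sketch: consecutive PUTs to the same host re-use one connection
        // when KeepAlive is set (it is the default for HttpWebRequest, but
        // pooling limits and premature server closes can still break re-use).
        public static void Put(string url, Stream data)
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
            req.Method = "PUT";
            req.KeepAlive = true;

            using (Stream body = req.GetRequestStream())
                data.CopyTo(body);

            using (WebResponse resp = req.GetResponse())
            {
                // A 2xx status means the file was stored.
            }
        }
    }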

Does this sound plausible?

Original comment by kenneth@hexad.dk on 8 May 2011 at 3:40

GoogleCodeExporter commented 9 years ago
Indeed, the short-lived backups that essentially just check that nothing needs 
to be done on the remote end do run one after another. For "Comptes", this 
somehow gets interrupted when I run it at the scheduled time. The "persistent 
connection" fix seems fine on paper, though I would definitely want to try it 
for real. Let me know if you are unable to reproduce with a test account or if 
there is anything I can do on my end to gather better debug output.

Original comment by eric.mo...@gmail.com on 9 May 2011 at 5:29

GoogleCodeExporter commented 9 years ago
Does the problem still exist?

Original comment by rst...@gmail.com on 31 Mar 2012 at 8:08

GoogleCodeExporter commented 9 years ago
I guess this is either fixed or there is no way to reproduce it, as the 
original reporter has lost interest. Let us know if the problem still exists.

Original comment by rst...@gmail.com on 18 Apr 2012 at 6:03