Open ts678 opened 3 years ago
Am I interpreting this correctly? From these log entries
2021-04-19 18:53:48 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-ie0e9f8d244554866a36f83a6e23f16ce.dindex.zip (27.70 KB)
2021-04-19 18:53:48 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2021-04-19 18:54:02 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
it seems that the process was killed between 2021-04-19 18:53:48 -04 and 2021-04-19 18:54:02 -04. Since the
[Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
was not followed by a corresponding
[Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took x:xx:xx:xx.xxx
it looks to me that the process was killed either during the transaction commit, or during the execution of the finally block when the Timer instance is disposed. Given that the database is missing the file entry, it seems that the process was killed exactly while the transaction was being committed. This could explain the missing file in the database.
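The rollback behavior suspected here can be sketched with a small Python `sqlite3` script (a made-up table and file name, not Duplicati's actual schema): a row inserted inside an uncommitted transaction leaves no trace if the process dies before COMMIT completes.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "test.sqlite")

con = sqlite3.connect(db)
con.execute("CREATE TABLE Remotevolume (Name TEXT, State TEXT)")
con.commit()

# Insert inside an open transaction, then simulate the kill by closing the
# connection WITHOUT committing; SQLite rolls the transaction back.
con.execute("INSERT INTO Remotevolume VALUES ('example.dindex.zip', 'Uploading')")
con.close()  # "process killed" here, before COMMIT finished

con2 = sqlite3.connect(db)
rows = con2.execute("SELECT COUNT(*) FROM Remotevolume").fetchone()[0]
print(rows)  # 0 -- the uploaded file left no trace in the database
```

This matches the symptom above: the remote Put completed, but the database row for the file evaporated with the uncommitted transaction.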
The CommitAfterUpload took timings seem to average maybe 0.2 seconds; I think I saw a 0.6 in the log, but many are at 0.000.
I'm not sure how the reported timings interact with the way the commit is implemented, but the code doing the commit is this:
It logged VerifyConsistency at 18:53:44, so I'm guessing it died in the CommitTransactionAsync below:
Despite the code comments about sending the filelist and waiting for completion, the last two files at the destination are, at 6:53:48: duplicati-ie0e9f8d244554866a36f83a6e23f16ce.dindex.zip and duplicati-b4e6d3a94c2d84d4fb60eb0989cc4da26.dblock.zip
OperationTable backup times are: 24 at 6:53:06, 25 at 6:53:42 (interrupted), 26 at 6:54:03 (final failure)
I would guess 6:53:48 was process kill time because that's the last log time and also the DB modification time on the .sqlite file. Times don't make precise sense, but the process exit is polled each second, and there's a 10 second sleep done between cycles. There's also a little bit of startup time of Duplicati.CommandLine.exe before it gets far enough along to record the backup start.
One area that's a little hazy to me is what happens to the -journal file during all this. This time I only browsed DB after a failure. The DB rotation is in a run-script-before, and copies the .sqlite files down, so the no-extra-suffix one is copied to a .1.sqlite one. I'm wondering if browsing of the interrupted backup's DB would look different if there was a .1.sqlite-journal doing DB rollback.
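A small guard in the rotation script could reduce that hazard: refuse (or warn) when a journal file sits beside the database being copied, since copying the main .sqlite file alone could snapshot a state the journal was meant to roll back. A minimal sketch, assuming the rotation runs while Duplicati is stopped (`snapshot_is_safe` is a made-up helper name):

```python
import os
import tempfile

def snapshot_is_safe(db_path: str) -> bool:
    """Return False if a rollback journal or WAL file sits beside db_path."""
    return not any(os.path.exists(db_path + suffix)
                   for suffix in ("-journal", "-wal"))

# Demo on a throwaway path standing in for the real database file:
db = os.path.join(tempfile.mkdtemp(), "KNNDCXNQKV.sqlite")
open(db, "w").close()
ok_before = snapshot_is_safe(db)    # True: no journal beside it
open(db + "-journal", "w").close()  # simulate a leftover hot journal
ok_after = snapshot_is_safe(db)     # False: copying just the .sqlite is risky
print(ok_before, ok_after)  # True False
```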
Next backup job (similar to original, but with kill percent chance reduced from 5 to 2) failed in a similar fashion, but on a dblock:
Profiling log view of start and (maybe) end of backups, and of the dblock in question:
2021-04-20 09:25:19 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:25:39 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:25:52 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:13.432
2021-04-20 09:26:07 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:26:15 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:07.978
2021-04-20 09:26:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:26:41 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:12.603
2021-04-20 09:26:56 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:26:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (36, "duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 09:26:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (36, "duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-20 09:26:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip (168.87 KB)
2021-04-20 09:27:12 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 09:27:13 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip
2021-04-20 09:27:13 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:01.162
Profiling log transition between kill and next backup:
2021-04-20 09:26:58 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.009
2021-04-20 09:26:58 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-20 09:26:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip (168.87 KB)
2021-04-20 09:26:58 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2021-04-20 09:26:58 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 168.87 KB in 00:00:00.0205004, 8.04 MB/s
2021-04-20 09:27:12 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
Console output of last two backups, first killed, second failing on file seen as not in DB:
Cycle 36
Backup started at 4/20/2021 9:26:54 AM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (67 bytes)
Scanning local files ...
0 files need to be examined (0 bytes)
Backend event: Put - Started: duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip (168.87 KB)
Uploading file (168.87 KB) ...
SUCCESS: The process "Duplicati.CommandLine.exe" with PID 17996 has been terminated.
-17644 finished with status=256 exit code=1
Cycle 37
Backup started at 4/20/2021 9:27:11 AM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (68 bytes)
Extra unknown file: duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip
Found 1 remote files that are not recorded in local storage, please run repair
Fatal error => Found 1 remote files that are not recorded in local storage, please run repair
ErrorID: ExtraRemoteFiles
Found 1 remote files that are not recorded in local storage, please run repair
-23840 finished with status=25600 exit code=100
There are no journal files right now, but that might be because the failed backup cleaned it up on exit or on DB open. Database file timestamps before opening with DB Browser for SQLite (a.k.a. sqlitebrowser if you install it as a Linux package): 9:26:58 AM KNNDCXNQKV.1.sqlite 9:27:13 AM KNNDCXNQKV.sqlite
duplicati-b51f7a7f173d24147a5874a53f3fdd413.dblock.zip is at the destination with timestamp 9:26:58
As a side note, I suspect Repair would fix up these extra unknown files easily by deleting them, but I haven't tested that yet...
Will continue running to see if I can get a harder-to-fix error. The first one yesterday resisted Repair and Recreate (with all dblocks), however it yielded to list-broken-files and purge-broken-files. Unfortunately I didn't have enough data collection set up yet...
It may be necessary to fix some of the more common problems to reduce the noise levels, and expose the less common ones.
Next backup job is the same plan, and failed similarly to previous.
Profiling info on the dblock that got lost track of, and Running Backup lines for context:
2021-04-20 10:43:49 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:09.194
2021-04-20 10:44:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 10:44:24 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 10:45:11 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 10:45:24 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:12.917
2021-04-20 10:45:38 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 10:45:39 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (54, "duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 10:45:39 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (54, "duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-20 10:45:41 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip (2.49 MB)
2021-04-20 10:45:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 10:45:56 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip
2021-04-20 10:45:56 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:01.152
Profiling log view of kill and transition to next backup:
2021-04-20 10:45:41 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.009
2021-04-20 10:45:41 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-20 10:45:41 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip (2.49 MB)
2021-04-20 10:45:41 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2021-04-20 10:45:55 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2021-04-20 10:45:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
Console ending:
Backup completed successfully!
-22056 finished with status=0 exit code=0
Cycle 57
Backup started at 4/20/2021 10:45:37 AM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (82 bytes)
Scanning local files ...
0 files need to be examined (0 bytes)
Backend event: Put - Started: duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip (2.49 MB)
Uploading file (2.49 MB) ...
SUCCESS: The process "Duplicati.CommandLine.exe" with PID 19724 has been terminated.
-23216 finished with status=256 exit code=1
Cycle 58
Backup started at 4/20/2021 10:45:54 AM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (83 bytes)
Extra unknown file: duplicati-bfec5cd45d4b24e4089edfc01c5b0c036.dblock.zip
Found 1 remote files that are not recorded in local storage, please run repair
Fatal error => Found 1 remote files that are not recorded in local storage, please run repair
ErrorID: ExtraRemoteFiles
Found 1 remote files that are not recorded in local storage, please run repair
-16904 finished with status=25600 exit code=100
It logged VerifyConsistency at 18:53:44, so I'm guessing it died in the CommitTransactionAsync below:
I agree. Based on the implementation of Timer.Dispose, I still suspect that the process was killed at the line I had linked to earlier (which is in CommitTransactionAsync):
Our SQLite dependencies are still unclear to me. Do you know what version is being used in your test cases?
Do you know what version is being used in your test cases?
https://github.com/duplicati/duplicati/commits/master/thirdparty/SQLite history suggests Aug 2, 2019 versions are shipped, and installed file Properties Details says 1.0.111.0 which is documented as the latest at that time, and corresponding to SQLite 3.28.0.
Fourth error looks kind of like the first one, in that the lost file is a dindex. I might try tweaking the next tests (if I run them) to see whether I can get more than 1 missing file. I'm not sure exactly when the DB commits happen in this sequence, however I wonder whether anything commits any of the Remotevolume table information prior to the kill. This might be a pretty large window to hit.
Profiling look at problem file, with some context:
2021-04-20 12:05:38 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 12:06:13 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:35.679
2021-04-20 12:06:27 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 12:06:49 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 12:06:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (96, "duplicati-i716f744918be415983767d75feebc605.dindex.zip", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 12:06:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (96, "duplicati-i716f744918be415983767d75feebc605.dindex.zip", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.001
2021-04-20 12:06:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
2021-04-20 12:06:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
2021-04-20 12:07:12 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 12:07:13 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-i716f744918be415983767d75feebc605.dindex.zip
2021-04-20 12:07:13 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:01.175
Profiling log transition between interrupted backup and final failing one:
2021-04-20 12:06:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2021-04-20 12:06:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took 0:00:00:00.118
2021-04-20 12:06:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (96, "duplicati-i716f744918be415983767d75feebc605.dindex.zip", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 12:06:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (96, "duplicati-i716f744918be415983767d75feebc605.dindex.zip", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.001
2021-04-20 12:06:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
2021-04-20 12:06:57 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 27.61 KB in 00:00:00.0019518, 13.81 MB/s
2021-04-20 12:06:57 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
2021-04-20 12:06:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2021-04-20 12:07:12 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2021-04-20 12:07:12 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
Console output:
Cycle 102
Backup started at 4/20/2021 12:06:48 PM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (169 bytes)
keeping protected incomplete remote file listed as Uploading: duplicati-20210420T160627Z.dlist.zip
promoting uploaded complete file from Uploading to Uploaded: duplicati-b4d705aa59f5a45b6837eb094bd7c6ba6.dblock.zip
Uploading filelist from previous interrupted backup
Re-creating missing index file for duplicati-b4d705aa59f5a45b6837eb094bd7c6ba6.dblock.zip
Scanning local files ...
0 files need to be examined (0 bytes)
Backend event: Put - Started: duplicati-i37291813253a48b6b8ce574601308602.dindex.zip (27.61 KB)
Uploading file (27.61 KB) ...
Backend event: Put - Completed: duplicati-i37291813253a48b6b8ce574601308602.dindex.zip (27.61 KB)
Error reported while accessing file: C:\Users\Ted\AppData\Roaming\Mozilla\Firefox\Profiles\0o1kzrw7.default-release-2-1607095340711\sessionstore-backups\recovery.jsonlz4.tmp => Could not find file '\\?\C:\Users\Ted\AppData\Roaming\Mozilla\Firefox\Profiles\0o1kzrw7.default-release-2-1607095340711\sessionstore-backups\recovery.jsonlz4.tmp'.
Backend event: Put - Started: duplicati-baf32cdfb856a4b569ab1c3d0c8ef5068.dblock.zip (2.49 MB)
Uploading file (2.49 MB) ...
Backend event: Put - Completed: duplicati-baf32cdfb856a4b569ab1c3d0c8ef5068.dblock.zip (2.49 MB)
Backend event: Put - Started: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
Uploading file (27.61 KB) ...
Backend event: Put - Completed: duplicati-i716f744918be415983767d75feebc605.dindex.zip (27.61 KB)
SUCCESS: The process "Duplicati.CommandLine.exe" with PID 19444 has been terminated.
ERROR: The process "Duplicati.Commandline.exe" not found.
-21360 finished with status=256 exit code=1
Cycle 103
Backup started at 4/20/2021 12:07:10 PM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (172 bytes)
removing file listed as Temporary: duplicati-20210420T160627Z.dlist.zip
scheduling missing file for deletion, currently listed as Uploading: duplicati-20210420T160628Z.dlist.zip
keeping protected incomplete remote file listed as Uploading: duplicati-20210420T160650Z.dlist.zip
Extra unknown file: duplicati-i716f744918be415983767d75feebc605.dindex.zip
Found 1 remote files that are not recorded in local storage, please run repair
Fatal error => Found 1 remote files that are not recorded in local storage, please run repair
ErrorID: ExtraRemoteFiles
Found 1 remote files that are not recorded in local storage, please run repair
-3928 finished with status=25600 exit code=100
This last one again looks like the process was killed during the transaction commit. Not sure what kind of mitigations can be done for that situation.
This last one again looks like the process was killed during the transaction commit.
I wish there were a DB expert around. Anyone? My understanding of the transaction concept is that it supports atomicity (and maybe more): the changes done in a transaction either all "stick" (commit), or none do (rollback). You don't have to die in the middle of database-internal code to lose uncommitted changes; rollback is a DB feature, not a bug. Rollback (if required) happens at restart.
Transactions also support consistency, as tables reference each other, yet are changed individually. A transaction holds changes to do as one group in order to maintain consistent table relationships. It's sort of like code locking a data structure while rearranging things, however code usually operates on volatile memory, so there's no persistent consistent data to maintain across crashes. It all vanishes.
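That cross-table consistency point can be illustrated with Python's `sqlite3` (a made-up two-table schema loosely echoing Remotevolume/Block relationships): a failure partway through a multi-table update rolls back both tables together, so a reader never sees half the change.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Remotevolume (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Block (VolumeID INTEGER REFERENCES Remotevolume(ID));
""")

try:
    with con:  # one transaction: commits on success, rolls back on exception
        cur = con.execute("INSERT INTO Remotevolume (Name) VALUES ('v1')")
        con.execute("INSERT INTO Block VALUES (?)", (cur.lastrowid,))
        raise RuntimeError("simulated failure mid-update")
except RuntimeError:
    pass

# Neither table kept its half of the change.
rv = con.execute("SELECT COUNT(*) FROM Remotevolume").fetchone()[0]
blk = con.execute("SELECT COUNT(*) FROM Block").fetchone()[0]
print(rv, blk)  # 0 0
```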
Atomic Commit In SQLite is a general description of the SQLite design. 4.2. Hot Rollback Journals is probably the situation on program kill. 4.5. Deleting The Hot Journal says how the hot journal is deleted after it does its job. 2.1. Rollback Journals describes how an application, at start (or maybe DB open),
uses the information in the journal to restore the database to its state prior to the start of the incomplete transaction
For a program that is so database-centric, I really wish someone with DB skills would volunteer to help, or original author would assist.
Isn’t the problem here about keeping a database consistent with a remote filesystem? The PUT operation was successful, but the insert into the database was interrupted. Regardless of how the database insert is performed (transaction or not), the file already exists on the backend and the insert was not successful.
At least, that’s my extremely simplified view of it.
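That framing fits the logs. One common pattern for keeping a database consistent with an external side effect is to commit the *intent* before performing it: record the volume in state Uploading, commit, do the PUT, then commit the state change. A hedged sketch (not Duplicati's actual code; table and function names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Remotevolume (Name TEXT, State TEXT)")

def record_intent(name):
    # Durable BEFORE the upload starts, so a later run can reconcile.
    con.execute("INSERT INTO Remotevolume VALUES (?, 'Uploading')", (name,))
    con.commit()

def mark_uploaded(name):
    con.execute("UPDATE Remotevolume SET State='Uploaded' WHERE Name=?", (name,))
    con.commit()

record_intent("example.dblock.zip")
# ... PUT happens here; suppose the process is killed before mark_uploaded() ...
# On restart, a remote file whose recorded State is "Uploading" is recognized
# (delete or promote), instead of surfacing as "Extra unknown file".
state = con.execute("SELECT State FROM Remotevolume").fetchone()[0]
print(state)  # Uploading
```

This seems to be what happens for the dblocks that get cleaned up as "removing incomplete remote file listed as Uploading"; the failing case looks like the intent row never reached disk.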
Reducing dblock size from 50MB to 1MB gets more dblock files going, but it still errors with only 1 missing file, which is possibly due to the way the DB commit work is landing. While doing this test, I also stopped it occasionally after good backups to see if Recreate was happy, given all the mayhem the kills were causing. It was, and I commend Duplicati for putting up with an unusual amount of abuse successfully, except for this issue. After one failure I verified that Repair could indeed remove the extra file, which lowers the urgency of a fix.
Understanding the transaction design would still be very helpful, because I suspect such errors are behind a lot of the DB problems...
Profiling log for the troubled file (with backup context):
2021-04-20 17:14:10 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 17:14:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:10.874
2021-04-20 17:14:35 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 17:15:00 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 17:15:05 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.001
2021-04-20 17:15:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip (661.35 KB)
2021-04-20 17:15:20 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
2021-04-20 17:15:26 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip
2021-04-20 17:15:26 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:05.939
Profiling log transition from interrupted to failing backup:
2021-04-20 17:15:05 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.055
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2021-04-20 17:15:20 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2021-04-20 17:15:20 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunBackup]: Starting - Running Backup
Simply based on profiling messages so far, it does appear that the kill is during the commit. I'm not sure how long a commit takes, because I "think" Duplicati is using the default journal plan, which involves a rollback journal. Changes are written into the main DB, and 2. How WAL Works says that in this mode a commit just needs to delete the rollback journal (because it won't be needed for recovery). Sounds fast.
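The journal plan can be checked directly: SQLite's default journal mode is `delete` (rollback journal), and after a successful commit no `-journal` file remains beside the database. A quick probe, assuming a plain connection with no pragmas applied (as I believe is the case here):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "mode.sqlite")
con = sqlite3.connect(db)

# Default journal mode for a file database is the rollback journal ("delete").
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # delete

con.execute("CREATE TABLE t (x)")
con.commit()
# Commit ends by deleting the hot journal, so nothing is left beside the DB:
print(os.path.exists(db + "-journal"))  # False
```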
Console output:
Cycle 66
Backup started at 4/20/2021 5:14:58 PM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (1.17 KB)
Scanning local files ...
0 files need to be examined (0 bytes)
Backend event: Put - Started: duplicati-b324447273557471ebb040402dcd82a48.dblock.zip (960.37 KB)
Uploading file (960.37 KB) ...
Backend event: Put - Started: duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip (978.35 KB)
Uploading file (978.35 KB) ...
Backend event: Put - Started: duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip (661.35 KB)
Uploading file (661.35 KB) ...
SUCCESS: The process "Duplicati.CommandLine.exe" with PID 61084 has been terminated.
-60428 finished with status=256 exit code=1
Cycle 67
Backup started at 4/20/2021 5:15:19 PM
The operation Backup has started
Checking remote backup ...
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (1.17 KB)
keeping protected incomplete remote file listed as Temporary: duplicati-20210420T211501Z.dlist.zip
removing incomplete remote file listed as Uploading: duplicati-b324447273557471ebb040402dcd82a48.dblock.zip
Backend event: Delete - Started: duplicati-b324447273557471ebb040402dcd82a48.dblock.zip ()
Deleting file duplicati-b324447273557471ebb040402dcd82a48.dblock.zip ...
Backend event: Delete - Completed: duplicati-b324447273557471ebb040402dcd82a48.dblock.zip ()
removing incomplete remote file listed as Uploading: duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip
Backend event: Delete - Started: duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip ()
Deleting file duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip ...
Backend event: Delete - Completed: duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip ()
Extra unknown file: duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip
Found 1 remote files that are not recorded in local storage, please run repair
Fatal error => Found 1 remote files that are not recorded in local storage, please run repair
ErrorID: ExtraRemoteFiles
Found 1 remote files that are not recorded in local storage, please run repair
-61772 finished with status=25600 exit code=100
The interrupted triple dblock upload attempt raises an interesting DB (and maybe transaction) question: why were the first two dblocks found on the final run as present and in State Uploading (and so removed at its start), whereas the last dblock is unknown? The interrupted backup's RemoteOperation table, viewed in a browser, confirms that the first two end the table and are Uploading. If it matters, the test runs unencrypted and has a 1MB upload and download throttle so that file transfers aren't overly instantaneous.
All three files are size 327680 in the list on the last (failed) backup. The first two were deleted. The third remains and is size 327680.
Isn’t the problem here about keeping a database consistent with a remote filesystem?
Presumably. I'm not a DBA, and don't know if there are standard methods for trying to solve this problem. The database also has rather coarse tools. If this were program code, one might want to flush some files and not others. I'm not sure the DB is that fine-grained.
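For what it's worth, SQLite does offer something finer-grained than a single big transaction: savepoints, which let part of a pending transaction be rolled back while keeping the rest. A sketch (illustrative table name; I'm not claiming Duplicati uses savepoints this way):

```python
import sqlite3

# isolation_level=None gives manual transaction control in Python's sqlite3.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE Remotevolume (Name TEXT)")

con.execute("BEGIN")
con.execute("SAVEPOINT vol_a")
con.execute("INSERT INTO Remotevolume VALUES ('a.dblock.zip')")
con.execute("RELEASE vol_a")      # folds vol_a's work into the outer txn

con.execute("SAVEPOINT vol_b")
con.execute("INSERT INTO Remotevolume VALUES ('b.dblock.zip')")
con.execute("ROLLBACK TO vol_b")  # undoes only vol_b's work
con.execute("COMMIT")

names = [r[0] for r in con.execute("SELECT Name FROM Remotevolume")]
print(names)  # ['a.dblock.zip']
```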
The PUT operation was successful, but the insert into the database was interrupted.
It may vary by case. In the one above, none of the three dblock put operations finished, yet the first two are in the DB somehow. Representing that as State "Uploading" seems accurate, but something got the first two into the RemoteOperation table without assistance from CommitTransactionAsync (which seems to happen only once, and seems aimed at the filelist as noted earlier).
If you ask "what filelist?", great question. With all the queues and parallelism, it's hard to say what's in progress where and when.
Regardless of how the database insert is performed (transaction or not), the file already exists on the backend and the insert was not successful.
The first two dblock file inserts were done, and stuck. The files can be seen in the above logs starting at Temporary, and something moves them to Uploading. BackendManager has a bunch of possible spots. However the first two were handled, why wasn't the third?
EDIT:
To take a wild guess, it might have to do with my first point about coarse commits. The early ones may have benefited from a commit done for a different reason. I use the glogg tool to look at logs, and it can do filtering by regular expressions. I used this rough one:
duplicati-b324447273557471ebb040402dcd82a48.dblock.zip|bca530e64d7904425886192cafdcc4c7a.dblock.zip|bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip|CommitTransactionAsync
and got
2021-04-20 17:15:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-b324447273557471ebb040402dcd82a48.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 17:15:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-b324447273557471ebb040402dcd82a48.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-20 17:15:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 17:15:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-20 17:15:04 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2021-04-20 17:15:05 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:00.432
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2021-04-20 17:15:05 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:00.000
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-20 17:15:05 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (63, "duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.001
2021-04-20 17:15:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b324447273557471ebb040402dcd82a48.dblock.zip (960.37 KB)
2021-04-20 17:15:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bca530e64d7904425886192cafdcc4c7a.dblock.zip (978.35 KB)
2021-04-20 17:15:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bd71f4e4848fe4a979dc84407cacae5f9.dblock.zip (661.35 KB)
2021-04-20 17:15:05 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
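For anyone without glogg, the same filtering can be done with a short script. This is just an illustration of the filter logic; the sample log lines below are abbreviated versions of the excerpt above, not real Duplicati output.

```python
import re

# Rough filter mirroring the glogg regular expression used above. The file
# names are the ones from this log excerpt.
pattern = re.compile(
    r"duplicati-b324447273557471ebb040402dcd82a48\.dblock\.zip"
    r"|bca530e64d7904425886192cafdcc4c7a\.dblock\.zip"
    r"|bd71f4e4848fe4a979dc84407cacae5f9\.dblock\.zip"
    r"|CommitTransactionAsync"
)

def filter_log(lines):
    """Keep only the lines matching the filter, the way glogg does."""
    return [line for line in lines if pattern.search(line)]

sample = [
    "2021-04-20 17:15:04 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main."
    "Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - "
    "CommitAddBlockToOutputFlush",
    "2021-04-20 17:15:05 -04 - Backend event: Put - Started: "
    "duplicati-b324447273557471ebb040402dcd82a48.dblock.zip (960.37 KB)",
    "2021-04-20 17:15:05 -04 - some unrelated profiling line",
]
for line in filter_log(sample):
    print(line)  # prints the first two sample lines only
```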
Steps to reproduce this, watch it in the debugger, and inspect the DB:
Create a small job: not encrypted, one short file, local folder destination.
If you don't already run a server profiling log, set one up on the job.
Look at the Database screen to see what path the DB will be using.
Attach a debugger to the Duplicati child TrayIcon or Server process.
Set a breakpoint at CommitTransactionAsync.
Run backup.
Hit breakpoint.
Profiling log might show:
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.002
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-21 11:36:28 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip (684 bytes)
The thread might look like:
0x0001511C 0x0000001D Thread Pool Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Common.DatabaseCommon.CommitTransactionAsync(string message, bool restart) (IL=0x001B, Native=0x00007FFF5A4C0D50+0x89) Normal 0000000000001111 0x00000000 Duplicati.GUI.TrayIcon.exe (id = 79204) [1] Duplicati.GUI.TrayIcon.exe Background, ThreadPool
The stack might look like:
> Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Common.DatabaseCommon.CommitTransactionAsync(string message, bool restart) (IL=0x001B, Native=0x00007FFF5A4C0D50+0x89)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(Duplicati.Library.Main.Operation.Common.BackendHandler.FileEntryItem item, Duplicati.Library.Interface.IBackend backend, System.Threading.CancellationToken cancelToken) (IL≈0x07BA, Native=0x00007FFF5A4D6350+0x127F)
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL≈0x0079, Native=0x00007FFFB10EDDA0+0x172)
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL=epilog, Native=0x00007FFFB10EDD80+0x15)
The local variable message might be "CommitAfterUpload".
A profiling log search for INSERT INTO "Remotevolume" might look like:
2021-04-21 11:36:27 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-20210421T153627Z.dlist.zip", "Files", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-21 11:36:27 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-20210421T153627Z.dlist.zip", "Files", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.001
2021-04-21 11:36:27 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-bc6dbe56c1b2147c2a2a54224c593dacb.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-21 11:36:27 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-bc6dbe56c1b2147c2a2a54224c593dacb.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-21 11:36:27 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-21 11:36:27 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
Look at DB.sqlite and DB.sqlite-journal with something that can search for .zip
Finding none is probably fine because commit hasn't run. Info may be in memory.
Rollback is even easier if the change goes away with memory, instead of having to undo it.
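The ".zip" search can be done with any binary-capable search tool; a minimal sketch is below. The database it builds is invented for the demo (a one-column stand-in for Remotevolume), not Duplicati's real schema, but it shows that a committed file name is findable in the raw bytes of the DB file.

```python
import os
import sqlite3
import tempfile

# Scan the raw bytes of a SQLite file (or its -journal) for ".zip".
def contains_zip(path):
    if not os.path.exists(path):
        return False
    with open(path, "rb") as f:
        return b".zip" in f.read()

# Demo database standing in for DB.sqlite; schema invented for illustration.
db = os.path.join(tempfile.mkdtemp(), "DB.sqlite")
conn = sqlite3.connect(db)
conn.execute('CREATE TABLE "Remotevolume" ("Name" TEXT)')
conn.execute('INSERT INTO "Remotevolume" VALUES ("duplicati-b1.dblock.zip")')
conn.commit()
conn.close()
print(contains_zip(db))  # True once the insert has been committed to disk
```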
This time I have a file, duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip. I think last time I didn't see one yet. Maybe BackendAsyncWorker was still working?
Terminate this Duplicati. I did it from the debugger; other ways probably work. Notice that DB.sqlite and DB.sqlite-journal are unchanged (no process is running now).
Start server.
Run backup.
Get red error popup:
Found 1 remote files that are not recorded in local storage, please run repair
Profiling log shows:
2021-04-21 12:19:25 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip
2021-04-21 12:19:25 -04 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraRemoteFiles]: Found 1 remote files that are not recorded in local storage, please run repair
The DB.sqlite-journal file is gone, maybe because of a rollback done at backup start. I didn't try it this time, but I suspect a first visit to the Restore screen would also roll back.
Sorry, now that I look at your original description more closely, I think I've been focusing on the wrong invocation of CommitTransactionAsync. Instead of this one, it looks from the log entries that it is actually this one being interrupted:
In your interrupted database, it shows that the problematic file duplicati-ie0e9f8d244554866a36f83a6e23f16ce.dindex.zip has a State of Uploaded.
Remotevolume table
ID OperationID Name Type Size Hash State
131 24 duplicati-i47c88e3b24ce439e82574732348564f6.dindex.zip Index 28396 OJ43U5UcLX3ORrclS/A8dPLSf5c3LD5s15qih4kIUaE= Verified
132 25 duplicati-20210419T225342Z.dlist.zip Files -1 NULL Uploading
133 25 duplicati-b4e6d3a94c2d84d4fb60eb0989cc4da26.dblock.zip Blocks 2596968 /4YY7VBnMRtK/uScmV8CmyzNwFVr7PR+/4GoPX3zOSs= Uploaded
134 25 duplicati-ie0e9f8d244554866a36f83a6e23f16ce.dindex.zip Index 28362 BoRLmH0Mm010FL0nkia2DLb3QQRePXIDXkEDqj1VGKk= Uploaded
To me, that seems to indicate that the update statement on line 441 below was committed. However, that's confusing to me, since it seems that the transaction used by m_database isn't committed until line 455, which is the commit that was supposedly killed.
Someone needs to study this more closely, but is it possible that the workers in BackendUploader.Run share a reference to m_database, and hence any one worker might call CommitTransactionAsync?
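If that is what happens, the consequence is easy to demonstrate in miniature. This sketch uses an invented stand-in schema, not Duplicati's, and models each "worker" as a plain function call: when workers funnel SQL through one shared connection and transaction, whichever worker calls commit persists everyone's pending statements, not just its own.

```python
import os
import sqlite3
import tempfile

# Demo database; schema invented for illustration.
db = os.path.join(tempfile.mkdtemp(), "shared.sqlite")
shared = sqlite3.connect(db)
shared.execute('CREATE TABLE "Remotevolume" ("Name" TEXT, "State" TEXT)')
shared.commit()

def worker_insert(name):
    # Each "worker" records its volume on the shared connection/transaction.
    shared.execute('INSERT INTO "Remotevolume" VALUES (?, ?)',
                   (name, "Uploading"))

worker_insert("duplicati-b1.dblock.zip")  # worker A, never commits itself
worker_insert("duplicati-b2.dblock.zip")  # worker B...
shared.commit()                           # ...commits, flushing A's row too

# A second connection sees both rows, although only one worker committed.
other = sqlite3.connect(db)
count = other.execute('SELECT COUNT(*) FROM "Remotevolume"').fetchone()[0]
print(count)  # 2
```

This would also fit the earlier observation that some Remotevolume rows survived without any obvious commit of their own: they may simply have ridden along on a commit issued for a different reason.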
I think there are some run-to-run differences that make things more confusing. There may also be concurrent threads. FWIW, here's a look at my initial debug run which, as mentioned, landed differently in terms of actually getting a file out:
Run 1 (not reported):
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-VerifyConsistency]: VerifyConsistency took 0:00:00:00.001
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: Starting - UpdateChangeStatistics
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619010387 AND "ID" != 1 ORDER BY "Timestamp" DESC
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619010387 AND "ID" != 1 ORDER BY "Timestamp" DESC took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-891739034C054D4C98B6C451EA8DF580" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = -1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-891739034C054D4C98B6C451EA8DF580" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = -1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-435B0D5D0877FC46915580D26E6167ED" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = 1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-435B0D5D0877FC46915580D26E6167ED" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = 1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" != -100 AND "File"."BlocksetID" != -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580")
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" != -100 AND "File"."BlocksetID" != -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580") took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580" WHERE "TmpFileList-891739034C054D4C98B6C451EA8DF580"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580" WHERE "TmpFileList-891739034C054D4C98B6C451EA8DF580"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580" A, "TmpFileList-435B0D5D0877FC46915580D26E6167ED" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash")
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-891739034C054D4C98B6C451EA8DF580" A, "TmpFileList-435B0D5D0877FC46915580D26E6167ED" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash") took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-891739034C054D4C98B6C451EA8DF580";
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-891739034C054D4C98B6C451EA8DF580"; took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-435B0D5D0877FC46915580D26E6167ED";
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-435B0D5D0877FC46915580D26E6167ED"; took 0:00:00:00.000
2021-04-21 09:09:55 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.008
2021-04-21 09:09:55 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-21 09:09:55 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b45ec3fcb00bf4848a17f5a5b7c6c5e3a.dblock.zip (684 bytes)
2021-04-21 10:50:54 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2021-04-21 10:50:54 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitUpdateRemoteVolume took 0:00:00:00.111
2021-04-21 10:50:54 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Uploading a new fileset took 0:01:40:58.553
2021-04-21 11:18:10 -04 - [Information-Duplicati.Server.WebServer.Server-ServerListening]: Server has started and is listening on 127.0.0.1, port 8300
Run 2 (was reported):
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-VerifyConsistency]: VerifyConsistency took 0:00:00:00.001
2021-04-21 11:36:28 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip (684 bytes)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: Starting - UpdateChangeStatistics
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619019387 AND "ID" != 1 ORDER BY "Timestamp" DESC
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619019387 AND "ID" != 1 ORDER BY "Timestamp" DESC took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -200 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = -1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
2021-04-21 11:36:28 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 684 bytes in 00:00:00.0009764, 684.11 KB/s
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = -1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-348B98F4811B5F4988B8B5E1480C2645" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = 1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-348B98F4811B5F4988B8B5E1480C2645" AS SELECT "File"."Path" AS "Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = 1 AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" != -100 AND "File"."BlocksetID" != -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99")
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" != -100 AND "File"."BlocksetID" != -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99") took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" WHERE "TmpFileList-4B16BE5B4532E5409B5024D323D91C99"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" WHERE "TmpFileList-4B16BE5B4532E5409B5024D323D91C99"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" A, "TmpFileList-348B98F4811B5F4988B8B5E1480C2645" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash")
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-4B16BE5B4532E5409B5024D323D91C99" A, "TmpFileList-348B98F4811B5F4988B8B5E1480C2645" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash") took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-4B16BE5B4532E5409B5024D323D91C99";
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-4B16BE5B4532E5409B5024D323D91C99"; took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-348B98F4811B5F4988B8B5E1480C2645";
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-348B98F4811B5F4988B8B5E1480C2645"; took 0:00:00:00.000
2021-04-21 11:36:28 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.002
2021-04-21 11:36:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-21 11:36:28 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b93e7e1a38886463ea61c523756a0b798.dblock.zip (684 bytes)
2021-04-21 12:18:04 -04 - [Information-Duplicati.Server.WebServer.Server-ServerListening]: Server has started and is listening on 127.0.0.1, port 8300
The logs (or rather the string passed to CommitTransactionAsync) show a mix of CommitAfterUpload and CommitUpdateRemoteVolume, and each of those two has two spots in the code that could issue it. Examining context might help, but it's still rather messy.
At least now it has steps to reproduce in a more controlled fashion. I don't know if there's a design behind where commits go... Whether or not there's one, I'm still thinking (and showing) that an INSERT without a commit from somewhere is not in the DB. There might be piggybacked flushes-to-database (commits) happening in some thread that help protect the earlier INSERT.
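The SQLite behavior underlying this can be shown outside Duplicati: an INSERT made inside an open transaction is invisible to every other connection, and a hard kill before COMMIT rolls it back. A minimal sketch in Python (the table and file name are illustrative, not Duplicati's actual schema):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "state.sqlite")

# Writer holds an explicit transaction, like Duplicati's long-lived one.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute('CREATE TABLE "Remotevolume" ("Name" TEXT, "State" TEXT)')
writer.execute("BEGIN")
writer.execute('INSERT INTO "Remotevolume" VALUES (?, ?)',
               ("duplicati-b1234.dblock.zip", "Uploaded"))

# A second connection (or the process after a restart) sees nothing yet;
# a hard kill at this point would lose the row entirely.
reader = sqlite3.connect(path)
before = reader.execute('SELECT COUNT(*) FROM "Remotevolume"').fetchall()[0][0]

writer.execute("COMMIT")  # only now does the row become durable
after = reader.execute('SELECT COUNT(*) FROM "Remotevolume"').fetchall()[0][0]
print(before, after)  # 0 1
```

So whichever thread happens to commit next would make the earlier INSERT durable as a side effect, which matches the "piggybacked flush" idea above.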
I have not tried a multi-dblock debug run to see if I can find a commit that pushed earlier RemoteOperation table rows into the DB.
Someone needs to study this more closely ...
I have no idea about the specific question, but am still hoping someone can study concurrency and its impact on database use.
Are you able to reproduce these issues when limiting Duplicati to one upload at a time?
Yes, but it still depends on how things land. Here, the first try didn't put out a file, and the second try did. Neither one got its info into the DB.
Profiling log end:
2021-04-22 13:14:41 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: UpdateChangeStatistics took 0:00:00:00.006
2021-04-22 13:14:41 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2021-04-22 13:14:41 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2fd6517a6c474ddfa5f209ca1c9df4b8.dblock.zip (684 bytes)
No file actually at destination yet.
No .zip references in DB or journal (viewed in hex editor)
The message parameter is CommitUpdateRemoteVolume.
Stack:
> Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Common.DatabaseCommon.CommitTransactionAsync(string message, bool restart) (IL=0x001B, Native=0x00007FFF5A56E980+0x89)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Backup.UploadRealFilelist.<>c__DisplayClass1_0.<Run>b__0(<>f__AnonymousType6`1<CoCoL.IWriteChannel<Duplicati.Library.Main.Operation.Backup.IUploadRequest>> self) (IL≈0x0435, Native=0x00007FFF5A5666A0+0xA64)
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL≈0x0079, Native=0x00007FFFB10EDDA0+0x172)
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL=epilog, Native=0x00007FFFB10EDD80+0x15)
Thread:
0x00010BE0 0x0000003A Thread Pool Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Common.DatabaseCommon.CommitTransactionAsync(string message, bool restart) (IL=0x001B, Native=0x00007FFF5A56E980+0x89) Normal 0000000000001111 0x00000000 Duplicati.GUI.TrayIcon.exe (id = 54624) [1] Duplicati.GUI.TrayIcon.exe Background, ThreadPool
Looking for other recognizable relevant threads and stacks:
0x00004784 0x00000008 Worker Thread WorkerThread<IRunnerData> mscorlib.dll!System.Threading.ManualResetEventSlim.Wait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) (IL=???, Native=0x00007FFFB1160A00+0x364) Normal 0000000000001111 0x00000000 Duplicati.GUI.TrayIcon.exe (id = 54624) [1] Duplicati.GUI.TrayIcon.exe Background, WaitSleepJoin, UnsafePoint
has
> mscorlib.dll!System.Threading.ManualResetEventSlim.Wait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) (IL=???, Native=0x00007FFFB1160A00+0x364)
mscorlib.dll!System.Threading.Tasks.Task.SpinThenBlockingWait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) (IL≈0x002D, Native=0x00007FFFB115B400+0xBB)
mscorlib.dll!System.Threading.Tasks.Task.InternalWait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) (IL≈0x007F, Native=0x00007FFFB1AA0020+0x1B1)
mscorlib.dll!System.Threading.Tasks.Task.Wait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) (IL≈0x0019, Native=0x00007FFFB115AB10+0x39)
CoCoL.dll!CoCoL.ChannelExtensions.WaitForTask(System.Threading.Tasks.Task task) (IL≈0x0060, Native=0x00007FFF59762A90+0x29)
CoCoL.dll!CoCoL.ChannelExtensions.WaitForTaskOrThrow(System.Threading.Tasks.Task task) (IL≈0x000E, Native=0x00007FFF59762970+0x16)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.BackupHandler.Run(string[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) (IL≈0x0009, Native=0x00007FFF5A497490+0x54)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(Duplicati.Library.Main.BackupResults result) (IL=0x0050, Native=0x00007FFF5A4959A0+0xE8)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Controller.RunAction<Duplicati.Library.Main.BackupResults>(Duplicati.Library.Main.BackupResults result, ref string[] paths, ref Duplicati.Library.Utility.IFilter filter, System.Action<Duplicati.Library.Main.BackupResults> method) (IL=0x0124, Native=0x00007FFF5A48B5B0+0x439)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Controller.Backup(string[] inputsources, Duplicati.Library.Utility.IFilter filter) (IL≈0x008C, Native=0x00007FFF5A4893D0+0x1FA)
Duplicati.Server.exe!Duplicati.Server.Runner.Run(Duplicati.Server.Runner.IRunnerData data, bool fromQueue) (IL≈0x0349, Native=0x00007FFF5A486310+0x8AD)
0x0000C2C4 0x0000001B Worker Thread Backend Async Worker mscorlib.dll!System.Threading.WaitHandle.InternalWaitOne(System.Runtime.InteropServices.SafeHandle waitableSafeHandle, long millisecondsTimeout, bool hasThreadAffinity, bool exitContext) (IL≈0x0014, Native=0x00007FFFB1129CB0+0x1C) Normal 0000000000001111 0x00000000 Duplicati.GUI.TrayIcon.exe (id = 54624) [1] Duplicati.GUI.TrayIcon.exe Background, WaitSleepJoin, UnsafePoint
has
> mscorlib.dll!System.Threading.WaitHandle.InternalWaitOne(System.Runtime.InteropServices.SafeHandle waitableSafeHandle, long millisecondsTimeout, bool hasThreadAffinity, bool exitContext) (IL≈0x0014, Native=0x00007FFFB1129CB0+0x1C)
mscorlib.dll!System.Threading.WaitHandle.WaitOne(int millisecondsTimeout, bool exitContext) (IL≈0x0000, Native=0x00007FFFB1129C70+0x2F)
Duplicati.Library.Utility.dll!Duplicati.Library.Utility.BlockingQueue<Duplicati.Library.Main.BackendManager.FileEntryItem>.Dequeue() (IL≈0x0065, Native=0x00007FFF5A4A91D0+0xBC)
Duplicati.Library.Main.dll!Duplicati.Library.Main.BackendManager.ThreadRun() (IL≈0x0007, Native=0x00007FFF5A4A7D00+0x5B)
0x000069D8 0x00000041 Thread Pool mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start<Duplicati.Library.Backend.File.<PutAsync>d__21>(ref Duplicati.Library.Backend.File.<PutAsync>d__21 stateMachine) (IL≈0x002C, Native=0x00007FFF5A56E890+0x80) Normal 0000000000001111 0x00000000 Duplicati.GUI.TrayIcon.exe (id = 54624) [1] Duplicati.GUI.TrayIcon.exe Background, UnsafePoint, ThreadPool
has
> mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start<Duplicati.Library.Backend.File.<PutAsync>d__21>(ref Duplicati.Library.Backend.File.<PutAsync>d__21 stateMachine) (IL≈0x002C, Native=0x00007FFF5A56E890+0x80)
Duplicati.Library.Backend.File.dll!Duplicati.Library.Backend.File.PutAsync(string remotename, System.IO.Stream stream, System.Threading.CancellationToken cancelToken) (IL≈0x0034, Native=0x00007FFF5A56E800+0x60)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(Duplicati.Library.Main.Operation.Common.BackendHandler.FileEntryItem item, Duplicati.Library.Interface.IBackend backend, System.Threading.CancellationToken cancelToken) (IL≈0x03A4, Native=0x00007FFF5A5678D0+0x8B5)
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL≈0x0079, Native=0x00007FFFB10EDDA0+0x172)
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL=epilog, Native=0x00007FFFB10EDD80+0x15)
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() (IL≈0x0024, Native=0x00007FFFB115CFA0+0x6F)
Terminate.
Because there was no file at the destination, I didn't expect a complaint at restart and backup, and there wasn't one.
Try again from scratch.
Profiling log end:
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-VerifyConsistency]: VerifyConsistency took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip (684 bytes)
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UpdateChangeStatistics]: Starting - UpdateChangeStatistics
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619112980 AND "ID" != 1 ORDER BY "Timestamp" DESC
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1619112980 AND "ID" != 1 ORDER BY "Timestamp" DESC took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1)
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1) took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -100 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1)
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = -1 AND "File"."BlocksetID" = -200 AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = 1) took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = -1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = 1 AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = -100 ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
Profiling log on duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip:
2021-04-22 13:36:21 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2021-04-22 13:36:21 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2021-04-22 13:36:21 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip (684 bytes)
2021-04-22 13:36:21 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip (684 bytes)
Destination file duplicati-b6af5260e61ff464b936ec28340fb0d6f.dblock.zip
No .zip references in DB or journal
The message parameter is CommitAfterUpload.
Stack:
> Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Common.DatabaseCommon.CommitTransactionAsync(string message, bool restart) (IL=0x001B, Native=0x00007FFF5A55E340+0x89)
Duplicati.Library.Main.dll!Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(Duplicati.Library.Main.Operation.Common.BackendHandler.FileEntryItem item, Duplicati.Library.Interface.IBackend backend, System.Threading.CancellationToken cancelToken) (IL≈0x07BA, Native=0x00007FFF5A556C60+0x127F)
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL≈0x0079, Native=0x00007FFFB10EDDA0+0x172)
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) (IL=epilog, Native=0x00007FFFB10EDD80+0x15)
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() (IL≈0x0024, Native=0x00007FFFB115CFA0+0x6F)
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) (IL≈0x001C, Native=0x00007FFFB11624C0+0x63)
Terminate.
With a file out at the destination and nothing in the DB, the forecast is a failure after terminate and restart, and there was one.
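The state reached here can be modeled as two non-atomic steps: the remote Put completes first, then the DB commit records it. A hedged sketch of that window (hypothetical function, table, and file names; a model of the ordering, not Duplicati's actual code):

```python
import os
import sqlite3
import tempfile

def put_then_commit(db, remote_dir, name, payload, killed_before_commit):
    """Model of the upload path: step 1 writes the remote file,
    step 2 commits the DB row. Nothing makes the pair atomic."""
    with open(os.path.join(remote_dir, name), "wb") as f:
        f.write(payload)                       # step 1: Put - Completed
    if killed_before_commit:
        return                                 # simulated hard process kill
    db.execute('INSERT INTO "Remotevolume" ("Name") VALUES (?)', (name,))
    db.commit()                                # step 2: CommitAfterUpload

remote = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "Remotevolume" ("Name" TEXT)')

put_then_commit(db, remote, "duplicati-b1.dblock.zip", b"data",
                killed_before_commit=True)

orphan = os.listdir(remote)
recorded = db.execute('SELECT COUNT(*) FROM "Remotevolume"').fetchall()[0][0]
print(orphan, recorded)  # ['duplicati-b1.dblock.zip'] 0
```

A kill in that window leaves exactly the observed evidence: an orphan remote file with no corresponding row, so the next backup complains about a remote file it never recorded.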
This issue has been mentioned on Duplicati. There might be relevant details there:
https://forum.duplicati.com/t/ftp-delete-not-atomic-verified-transactional/12936/5
This issue has been mentioned on Duplicati. There might be relevant details there:
https://forum.duplicati.com/t/ftp-delete-not-atomic-verified-transactional/12936/41
Environment info
Description
After an interrupted backup, the next backup failed with remote files that are not recorded in local storage. Did the DB forget the upload? This scripted test runs a backup repeatedly, with a second thread given a 5 percent chance of killing the process every second. An earlier test using a lower kill probability failed in a different way. The tests simulate hard stops, e.g. shutting down a system.
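The kill harness described above can be sketched like this (the command being supervised and the probability are assumptions from the description; a placeholder sleep stands in for the actual backup invocation):

```python
import random
import subprocess
import threading
import time

def watcher(proc, kill_probability=0.05):
    """Every second, roll the dice; on a hit, hard-kill the process
    to simulate an abrupt shutdown mid-backup."""
    while proc.poll() is None:
        time.sleep(1)
        if random.random() < kill_probability:
            proc.kill()

# Placeholder for the real backup command line.
proc = subprocess.Popen(["sleep", "30"])
threading.Thread(target=watcher, args=(proc, 1.0), daemon=True).start()
proc.wait()
print(proc.returncode)  # negative on POSIX when killed by a signal
```

Running the backup in such a loop eventually lands the kill inside the Put/commit window, which is the interruption pattern the logs above show.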
Steps to reproduce
EDIT: Steps posted. A new kill test still reproduces it. Repair fixes it, but sadly hits "Database is locked when performing auto-cleanup" #4631.
Screenshots
Debug log
Console output:
Profiling log references to the now-not-in-DB file:
Profiling log details:
Database comparison between databases after last two runs: