Tom-H-L opened this issue 2 years ago
Both Borg and Vorta versions are rather old. Does it happen with newer versions too?
I could imagine that the following lines are the cause of this behaviour:
Is it possible that your keyring is locked and couldn't be unlocked when vorta requested the password?
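For reference, one way to check that from Python is the short sketch below. It uses the secretstorage library, which the VortaSecretStorageKeyring mentioned in the logs further down suggests Vorta builds on; this is my illustration, not Vorta's actual code.
# Hedged sketch: query the lock state of the default Secret Service collection.
import secretstorage
connection = secretstorage.dbus_init()
collection = secretstorage.get_default_collection(connection)
print('keyring locked' if collection.is_locked() else 'keyring unlocked')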
I had another look at the code. Maybe it is a different issue, but one thing is for sure: the logs you provided are incomplete.
Is it possible that your keyring is locked and couldn't be unlocked when vorta requested the password?
The keyring is currently unlocked. I double-checked twice:
I had another look at the code. Maybe it is a different issue, but one thing is for sure: the logs you provided are incomplete.
The logs are everything that is shown as soon as I press the "Unmount" button. Which other parts of the log file do you find interesting so I can fetch the relevant parts?
Can it be related to the password in the keyring itself, concerning length, special characters, etc.?
The logs are everything that is shown as soon as I press the "Unmount" button. Which other parts of the log file do you find interesting so I can fetch the relevant parts?
Following the logs you posted, either the GUI must have shown a status message next to the Cancel button ("Please unlock your system password manager or disable it under Misc" or "Mount point not active"), OR the logs show "Add job for site <n>".
"Mount point not active". OR the logs show "Add job for site <n>".
Yes, the GUI says "Mount point not active"!
I just did the following:
I clicked on "Refresh". The log output after pressing the button is as follows, and it contains "Add job for site 2".
2022-11-03 05:34:04,639 - vorta.keyring.abc - DEBUG - Only available on macOS
2022-11-03 05:34:04,640 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 05:34:04,641 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
2022-11-03 05:34:04,643 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 05:34:04,645 - root - DEBUG - Found 1 passwords matching repo URL.
2022-11-03 05:34:04,677 - vorta.borg.jobs_manager - DEBUG - Add job for site 2
2022-11-03 05:34:04,677 - vorta.borg.jobs_manager - DEBUG - Start job on site: 2
2022-11-03 05:34:04,700 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg list --info --log-json --json /home/user/Documents/10 Backups/Borg
2022-11-03 05:34:05,284 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 2
2022-11-03 05:34:05,285 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 2
Right after that, below the Archive button it now says "Refreshed archives".
Now I clicked that button's menu and selected "Unmount". Below the button, "Refreshed archives" changed back to "Mount point not active" and the log file got these lines:
2022-11-03 05:42:51,765 - vorta.keyring.abc - DEBUG - Only available on macOS
2022-11-03 05:42:51,767 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 05:42:51,768 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
2022-11-03 05:42:51,769 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 05:42:51,771 - root - DEBUG - Found 1 passwords matching repo URL.
When trying to unmount, what does the following command output? Be sure to replace the path by the actual path of the mountpoint.
python3 -c "import psutil; print([p for p in psutil.disk_partitions(all=True) if p.mountpoint == '/home/username/Documents/Temp (deleteme)/tempmount' or 'borg' in p.device or 'fuse' in p.fstype])"
When trying to unmount, what does the following command output? Be sure to replace the path by the actual path of the mountpoint.
python3 -c "import psutil; print([p for p in psutil.disk_partitions(all=True) if p.mountpoint == '/home/username/Documents/Temp (deleteme)/tempmount' or 'borg' in p.device or 'fuse' in p.fstype])"
After a reboot I could reproduce the problem. Without having done anything else first, I did the following:
2022-11-03 15:01:54,144 - vorta.scheduler - DEBUG - Scheduler for profile 1 is disabled.
2022-11-03 15:01:54,146 - vorta.scheduler - INFO - Setting timer for profile 2
2022-11-03 15:01:54,147 - vorta.scheduler - DEBUG - Scheduling next run for 2022-11-03 16:00:00
2022-11-03 15:15:07,146 - vorta.keyring.abc - DEBUG - Only available on macOS
2022-11-03 15:15:07,300 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 15:15:07,302 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
2022-11-03 15:15:07,304 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 15:15:07,306 - root - DEBUG - Found 1 passwords matching repo URL.
2022-11-03 15:15:14,126 - vorta.borg.jobs_manager - DEBUG - Add job for site 2
2022-11-03 15:15:14,129 - vorta.borg.jobs_manager - DEBUG - Start job on site: 2
2022-11-03 15:15:14,136 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg --log-json mount /home/XXXX/XXXX/XXXX Backups/Borg::XXXX-2022-11-02-185959 /home/XXXX/XXXX/Temp (deleteme)/tempmount
2022-11-03 15:15:19,108 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 2
2022-11-03 15:15:19,112 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 2
Below the button it says "Mounted successfully". I then tried to unmount it right away. Result: below the button it says "Mount point not active", the volume remains mounted, and the log added:
2022-11-03 15:31:18,746 - vorta.keyring.abc - DEBUG - Only available on macOS
2022-11-03 15:31:18,748 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 15:31:18,749 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
2022-11-03 15:31:18,751 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-03 15:31:18,752 - root - DEBUG - Found 1 passwords matching repo URL.
I then ran the python3 command as per your instruction. The output is as follows, with the path, user id, and group id obfuscated by me as "XXXX":
[sdiskpart(device='fusectl', mountpoint='/sys/fs/fuse/connections', fstype='fusectl', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='gvfsd-fuse', mountpoint='/run/user/XXXX/gvfs', fstype='fuse.gvfsd-fuse', opts='rw,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX', maxfile=1024, maxpath=4096), sdiskpart(device='portal', mountpoint='/run/user/XXXX/doc', fstype='fuse.portal', opts='rw,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX', maxfile=None, maxpath=4096), sdiskpart(device='borgfs', mountpoint='/home/XXXX/XXXX/Temp (deleteme)/tempmount', fstype='fuse', opts='ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions', maxfile=255, maxpath=4096)]
sdiskpart(device='borgfs', mountpoint='/home/XXXX/XXXX/Temp (deleteme)/tempmount', fstype='fuse', opts='ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions', maxfile=255, maxpath=4096)
This is the mountpoint in question and it is correctly found as an active mount. Does the GUI show the correct mount path next to the archive name you mounted?
This is the mountpoint in question and it is correctly found as an active mount.
Yes, the mountpoint gets mounted correctly. Here is what the GNU/Linux tool "mount" reports, which lists all active mounts:
borgfs on /home/XXXX/XXXX/Temp/tempmount type fuse (ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions)
Does the GUI show the correct mount path next to the archive name you mounted?
Yes, it does show the mountpoint correctly in the GUI. Everything works fine with that mount: I can access it, I can extract files from it, etc. Only: I cannot unmount it anymore; I have to reboot to get rid of the mount. (I might be able to unmount it via the terminal, but as a Vorta and borg noob I am reluctant to fiddle around with the borgfs/fuse volume without concrete instructions since the archive contains real production data and I don't want to mess it up.)
I rebooted and tried out the following:
No cure.
but as a Vorta and borg noob I am reluctant to fiddle around with the borgfs/fuse volume without concrete instructions since the archive contains real production data and I don't want to mess it up
borg umount <path> or umount <path> should do the job. Those two commands are completely safe.
Yes, it does show the mountpoint correctly in the GUI.
Double clicking on the path in the GUI opens the correct folder?
I created a debug version of vorta for this issue. You can install it with the following commands. This won't touch borg, affect your repositories or your configuration of vorta.
sudo apt install python3-pip python3-dev build-essential git && \
pip3 install "vorta @ https://github.com/real-yfprojects/vorta/archive/refs/heads/debug%231461.zip"
After that, try reproducing the issue and have a look in the logs for a line starting with Selected archive.
borg umount <path> or umount <path> should do the job. Those two commands are completely safe.
I tried borg umount <path> (beware for GNU/Linux readers: umount, not unmount) and the volume unmounted successfully, thanks for that information!
What else happened:
Double clicking on the path in the GUI opens the correct folder?
Yes!
I created a debug version of vorta for this issue. You can install it with the following commands. This won't touch borg, affect your repositories or your configuration of vorta.
sudo apt install python3-pip python3-dev build-essential git && \
pip3 install "git+https://github.com/real-yfprojects/vorta@f64c045#egg=vorta"
After that, try reproducing the issue and have a look in the logs for a line starting with Selected archive.
That sounds great, but unfortunately, for policy reasons, I am not able to use anything on this production machine other than software from the official Debian repositories that Ubuntu uses (only "main" and "universe"). So I am afraid, and sorry, that I cannot do that on this machine right now. Please tell me if going this route would be of great significance for finding this bug; in that case I could set up a testing VM with an identical setup to try to reproduce the bug there and use the tool you created, but this would not be earlier than in a few days. Is there anything else that I could try out and report right now on this machine with the tools I got?
Is there anything else that I could try out and report right now on this machine with the tools I got?
I don't know of anything. To me this is the definition of an impossible bug.
Updated to 0.8.7, no cure
I've been experiencing the same up to the current Vorta 0.8.12 (Flatpak); I'm not sure what conditions recreate the issue.
In my case there may be a loose connection to putting the system into sleep mode, waking it, and then trying to unmount, but I can't confirm yet.
Update: it seems these things aren't related; even trying to unmount immediately after mounting doesn't work.
Thank you @quazar-omega for reporting. Can you run the debug version from https://github.com/borgbase/vorta/issues/1461#issuecomment-1302409318, reproduce the issue and post the logs here?
Sure! I'll try that soon
Sorry if my soon isn't being very soon (∩´ᴖ`∩)
Can I ask you if these instructions still apply? I'm using Fedora Kinoite so I'm guessing I could try to run that in a Debian Distrobox container?
Can I ask you if these instructions still apply?
They do still apply for Debian/Ubuntu. For Fedora the following should work:
sudo dnf install python3-pip git
python -m pip install --user "vorta @ https://github.com/real-yfprojects/vorta/archive/refs/heads/debug%231461.zip"
Ok, I've run it in a Debian 12 container
Mounting doesn't seem to work, it outputs this:
2023-09-10 17:44:30,468 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg --log-json mount /run/media/amusing-dove/Backup/backup-fedora-workstation::ddd174-f5765038 /home/amusing-dove/.local/share/box-homes/Box-Debian/mnt
2023-09-10 17:44:34,033 - vorta.borg.borg_job - WARNING - fuse: failed to exec fusermount3: No such file or directory
2023-09-10 17:44:34,193 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 1
2023-09-10 17:44:34,195 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 1
Maybe I'll try to install on my host OS
Maybe I'll try to install on my host OS
That will probably save time.
Mounting doesn't seem to work, it outputs this:
Most likely the libfuse package is missing.
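A quick way to verify that from Python is sketched below; note that fusermount3 usually ships in the distribution's fuse3 package, but that package name is my assumption and varies by distribution.
# Check whether the FUSE helper borg complained about is on the PATH.
import shutil
print(shutil.which('fusermount3') or 'fusermount3 not found on PATH')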
Alright, I've got something; I managed to run it now by installing the same way on my host.
(Realized I just leaked my password like a dumbass... well, whatever)
Sorry about that. While they are unsafe now, I can still completely redact them from GitHub if you want.
I will look into your results later when I have more time. Thank you for running the debug build.
It's fine, there's no need; it's all local anyway. I can just change it or make a new repo. Once it's on the internet, it's already compromised after all :)
About the message: just giving it a mindless skim, it doesn't look like there's any specific error reported; is that actually the important part? It is what was output when I clicked the button each time I tried, so I'm wondering if there's more info to give. I'll stay available to help troubleshoot.
About the message: just giving it a mindless skim, it doesn't look like there's any specific error reported; is that actually the important part?
No, it is printing out extra information (that's also why your passwords appeared there; usually they aren't logged). Hopefully this information will allow me to get closer to the root cause of this issue. However, I can't guarantee anything.
Got it, thanks for looking into this!
Ok, I got closer to the problem (at least in your case @quazar-omega). While you seem to have mounted the archive into /home/amusing-dove/.mnt/old-backup, psutil.disk_partitions(all=True) will return the mountpoint /var/home/amusing-dove/.mnt/old-backup. This explains Vorta's behaviour. However, I don't know where the /var comes from.
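A hypothetical illustration of that mismatch, with the paths taken from the logs above (this is not Vorta's actual code):
# The GUI remembers the path as entered, while psutil reports the
# canonical, symlink-resolved mountpoint.
import psutil

gui_path = '/home/amusing-dove/.mnt/old-backup'  # path as entered in the GUI
active = [p.mountpoint for p in psutil.disk_partitions(all=True)]
print(gui_path in active)  # False if the mount is listed under /var/home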
Can you reproduce the bug and check the output of cat /proc/mounts, which should list all mounts? This will allow determining whether there is an issue with psutil. If that also outputs /var/home/..., can you check that the archive isn't mounted in /home/amusing-dove/... but only in /var/home/...?
Looking through the output, I see at the end:
borgfs /var/home/amusing-dove/.mnt/old-backup fuse ro,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions 0 0
I assume that's what I'm looking for? It is listed with /var, which I can further confirm with the fact that /home is a symbolic link to /var/home:
/ $ ls -l
total 48
lrwxrwxrwx. 4 root root 7 May 1 19:02 bin -> usr/bin
drwxr-xr-x. 7 root root 4096 Sep 10 18:04 boot
drwxr-xr-x. 21 root root 4640 Sep 16 13:22 dev
drwxr-xr-x. 1 root root 4506 Sep 11 12:46 etc
lrwxrwxrwx. 4 root root 8 May 1 19:02 home -> var/home
lrwxrwxrwx. 7 root root 7 May 1 19:02 lib -> usr/lib
lrwxrwxrwx. 7 root root 9 May 1 19:02 lib64 -> usr/lib64
lrwxrwxrwx. 4 root root 9 May 1 19:02 media -> run/media
lrwxrwxrwx. 4 root root 7 May 1 19:02 mnt -> var/mnt
lrwxrwxrwx. 4 root root 7 May 1 19:02 opt -> var/opt
lrwxrwxrwx. 4 root root 14 May 1 19:02 ostree -> sysroot/ostree
dr-xr-xr-x. 466 root root 0 Sep 16 2023 proc
lrwxrwxrwx. 4 root root 12 May 1 19:02 root -> var/roothome
drwxr-xr-x. 51 root root 1300 Sep 16 13:20 run
lrwxrwxrwx. 4 root root 8 May 1 19:02 sbin -> usr/sbin
lrwxrwxrwx. 4 root root 7 May 1 19:02 srv -> var/srv
dr-xr-xr-x. 13 root root 0 Sep 16 13:18 sys
drwxr-xr-x. 1 root root 74 May 1 19:02 sysroot
drwxrwxrwt. 19 root root 460 Sep 16 13:23 tmp
drwxr-xr-x. 1 root root 106 Jan 1 1970 usr
drwxr-xr-x. 1 root root 222 Sep 11 12:46 var
So the archive is mounted on "both", i.e. the actual path prepended by /var.
I see. So in your case the home directory is linked to /var/home. That's why /proc/mounts reports the mount there. Consequently, this issue can be fixed by implementing support for symlinks in the mount path.
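A minimal sketch of what such a fix could look like, assuming the comparison just needs to canonicalise both sides with os.path.realpath (my illustration, not the actual patch):
# Symlink-aware mount check: resolve both paths before comparing.
import os
import psutil

def is_mount_active(mount_path):
    target = os.path.realpath(mount_path)  # /home/... resolves to /var/home/...
    return any(os.path.realpath(p.mountpoint) == target
               for p in psutil.disk_partitions(all=True))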
@Tom-H-L Does the mount path you reported this issue for contain any hard or symbolic links?
Description
Reproduction
OS: Ubuntu 22.04.1 LTS Desktop
Version of Vorta: 0.8.3
What did you install Vorta with? Distribution package
Version of Borg: 1.2.0
Logs