borgbase / vorta

Desktop Backup Client for Borg Backup
https://vorta.borgbase.com
GNU General Public License v3.0

Can't unmount #1461

Open Tom-H-L opened 2 years ago

Tom-H-L commented 2 years ago

Description

  1. I cannot unmount a mounted archive.
  2. I press the Archive > Selected Archive > Unmount button, but nothing happens: the archive does not unmount (I can still access it on the filesystem), and the button does not change its label to "Mount"; it still says "Unmount".
  3. Every time I click the button, the same 5 log lines are written to the log file (see below).
  4. The path of the archive includes a space and round brackets "()", filesystem ext4: "/home/username/Documents/Temp (deleteme)/tempmount"; maybe this is relevant.

Reproduction

OS

Ubuntu 22.04.1 LTS Desktop

Version of Vorta

0.8.3

What did you install Vorta with?

Distribution package

Version of Borg

1.2.0

Logs

2022-11-02 20:29:58,970 - vorta.keyring.abc - DEBUG - Only available on macOS
2022-11-02 20:29:58,976 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-02 20:29:58,978 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
2022-11-02 20:29:58,980 - asyncio - DEBUG - Using selector: EpollSelector
2022-11-02 20:29:58,983 - root - DEBUG - Found 1 passwords matching repo URL.
m3nu commented 2 years ago

Both Borg and Vorta versions are rather old. Does it happen with newer versions too?

real-yfprojects commented 2 years ago

I could imagine that the following lines are the cause of this behaviour:

https://github.com/borgbase/vorta/blob/055338af2cce1df74ee8d4750f131f86e5c6c7e8/src/vorta/keyring/secretstorage.py#L59-L64

Is it possible that your keyring is locked and couldn't be unlocked when vorta requested the password?
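
For reference, the pattern those lines implement looks roughly like the following minimal sketch using the secretstorage library. This is illustrative only, not Vorta's actual code, and the item attribute name is made up; the point is that a collection which stays locked means no password is returned to the caller.

    # Minimal sketch (not Vorta's actual code) of a SecretStorage lookup that can
    # silently return nothing when the keyring collection is locked.
    import secretstorage

    def find_password(repo_url):
        connection = secretstorage.dbus_init()
        collection = secretstorage.get_default_collection(connection)
        if collection.is_locked():
            collection.unlock()          # may pop up the desktop unlock dialog
            if collection.is_locked():   # dialog dismissed, still locked
                return None              # caller gets no password back
        # 'repo_url' as an item attribute is an assumption for illustration
        for item in collection.search_items({'repo_url': repo_url}):
            return item.get_secret().decode()
        return None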

real-yfprojects commented 2 years ago

I had another look at the code. Maybe it is a different issue, but one thing is for sure: the logs you provided are incomplete.

Tom-H-L commented 2 years ago

Is it possible that your keyring is locked and couldn't be unlocked when vorta requested the password?

The keyring is currently unlocked. I double-checked this in two ways:

  1. I locked the keyring and then pressed "Unmount" in Vorta: the keyring unlocking dialogue pops up and I unlock it. Nevertheless, Vorta still does not unmount the archive.
  2. I locked the keyring and unlocked it manually, then went to Vorta and pressed "Unmount". No remedy either.

Tom-H-L commented 2 years ago

I had another look at the code. Maybe it is a different issue, but one thing is for sure: the logs you provided are incomplete.

The logs are everything that is shown as soon as I press the "Unmount" button. Which other parts of the log file do you find interesting so I can fetch the relevant parts?

Tom-H-L commented 2 years ago

Could it be related to the password in the keyring itself, e.g. its length or special characters?

real-yfprojects commented 2 years ago

The logs are everything that is shown as soon as I press the "Unmount" button. Which other parts of the log file do you find interesting so I can fetch the relevant parts?

Based on the logs you posted, the GUI must have shown a status message next to the Cancel button: either Please unlock your system password manager or disable it under Misc or Mount point not active. Otherwise, the logs would show Add job for site <n>.

Tom-H-L commented 2 years ago

Mount point not active. Otherwise, the logs would show Add job for site <n>.

Yes, the GUI says "Mount point not active"!

I just did the following:

  1. I clicked on "Refresh". The log output after pressing the button is as follows, and an "Add job for site 2" entry is present.

    2022-11-03 05:34:04,639 - vorta.keyring.abc - DEBUG - Only available on macOS
    2022-11-03 05:34:04,640 - asyncio - DEBUG - Using selector: EpollSelector
    2022-11-03 05:34:04,641 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
    2022-11-03 05:34:04,643 - asyncio - DEBUG - Using selector: EpollSelector
    2022-11-03 05:34:04,645 - root - DEBUG - Found 1 passwords matching repo URL.
    2022-11-03 05:34:04,677 - vorta.borg.jobs_manager - DEBUG - Add job for site 2
    2022-11-03 05:34:04,677 - vorta.borg.jobs_manager - DEBUG - Start job on site: 2
    2022-11-03 05:34:04,700 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg list --info --log-json --json /home/user/Documents/10 Backups/Borg
    2022-11-03 05:34:05,284 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 2
    2022-11-03 05:34:05,285 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 2

    Right afterwards, under the Archive button it now says "Refreshed archives".

  2. Now I clicked that button's menu and selected "Unmount". Below the button, "Refreshed archives" changed to "Mount point not active" again, and the log file got this:

    2022-11-03 05:42:51,765 - vorta.keyring.abc - DEBUG - Only available on macOS
    2022-11-03 05:42:51,767 - asyncio - DEBUG - Using selector: EpollSelector
    2022-11-03 05:42:51,768 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
    2022-11-03 05:42:51,769 - asyncio - DEBUG - Using selector: EpollSelector
    2022-11-03 05:42:51,771 - root - DEBUG - Found 1 passwords matching repo URL.
real-yfprojects commented 2 years ago

When trying to unmount, what does the following command output? Be sure to replace the path by the actual path of the mountpoint.

python3 -c "import psutil; print([p for p in psutil.disk_partitions(all=True) if p.mountpoint == '/home/username/Documents/Temp (deleteme)/tempmount' or 'borg' in p.device or 'fuse' in p.fstype])"
Tom-H-L commented 2 years ago

When trying to unmount, what does the following command output? Be sure to replace the path by the actual path of the mountpoint.

python3 -c "import psutil; print([p for p in psutil.disk_partitions(all=True) if p.mountpoint == '/home/username/Documents/Temp (deleteme)/tempmount' or 'borg' in p.device or 'fuse' in p.fstype])"

After a reboot I could reproduce the problem. Without having done anything else, after the reboot I did the following:

  1. Mounted the last archive:
    2022-11-03 15:01:54,144 - vorta.scheduler - DEBUG - Scheduler for profile 1 is disabled.
    2022-11-03 15:01:54,146 - vorta.scheduler - INFO - Setting timer for profile 2
    2022-11-03 15:01:54,147 - vorta.scheduler - DEBUG - Scheduling next run for 2022-11-03 16:00:00
    +2022-11-03 15:15:07,146 - vorta.keyring.abc - DEBUG - Only available on macOS
    +2022-11-03 15:15:07,300 - asyncio - DEBUG - Using selector: EpollSelector
    +2022-11-03 15:15:07,302 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
    +2022-11-03 15:15:07,304 - asyncio - DEBUG - Using selector: EpollSelector
    +2022-11-03 15:15:07,306 - root - DEBUG - Found 1 passwords matching repo URL.
    +2022-11-03 15:15:14,126 - vorta.borg.jobs_manager - DEBUG - Add job for site 2
    +2022-11-03 15:15:14,129 - vorta.borg.jobs_manager - DEBUG - Start job on site: 2
    +2022-11-03 15:15:14,136 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg --log-json mount /home/XXXX/XXXX/XXXX Backups/Borg::XXXX-2022-11-02-185959 /home/XXXX/XXXX/Temp (deleteme)/tempmount
    +2022-11-03 15:15:19,108 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 2
    +2022-11-03 15:15:19,112 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 2

    Below the button it says "Mounted successfully". I then tried to unmount it right away. Result: below the button it says "Mount point not active", the volume remains mounted, and the log added:

    +2022-11-03 15:31:18,746 - vorta.keyring.abc - DEBUG - Only available on macOS
    +2022-11-03 15:31:18,748 - asyncio - DEBUG - Using selector: EpollSelector
    +2022-11-03 15:31:18,749 - vorta.borg.borg_job - DEBUG - Using VortaSecretStorageKeyring keyring to store passwords.
    +2022-11-03 15:31:18,751 - asyncio - DEBUG - Using selector: EpollSelector
    +2022-11-03 15:31:18,752 - root - DEBUG - Found 1 passwords matching repo URL.

    I then ran the python3 command as per your instruction. The output is as follows, with the path, user id, group id being obfuscated by me with "XXXX":

    [sdiskpart(device='fusectl', mountpoint='/sys/fs/fuse/connections', fstype='fusectl', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='gvfsd-fuse', mountpoint='/run/user/XXXX/gvfs', fstype='fuse.gvfsd-fuse', opts='rw,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX', maxfile=1024, maxpath=4096), sdiskpart(device='portal', mountpoint='/run/user/XXXX/doc', fstype='fuse.portal', opts='rw,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX', maxfile=None, maxpath=4096), sdiskpart(device='borgfs', mountpoint='/home/XXXX/XXXX/Temp (deleteme)/tempmount', fstype='fuse', opts='ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions', maxfile=255, maxpath=4096)]

real-yfprojects commented 2 years ago

sdiskpart(device='borgfs', mountpoint='/home/XXXX/XXXX/Temp (deleteme)/tempmount', fstype='fuse', opts='ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions', maxfile=255, maxpath=4096)

This is the mountpoint in question and it is correctly found as an active mount. Does the GUI show the correct mount path next to the archive name you mounted?

Tom-H-L commented 2 years ago

This is the mountpoint in question and it is correctly found as an active mount.

Yes, the mountpoint gets mounted correctly. Here is what the GNU/Linux tool "mount" reports, which lists all active mounts:

    borgfs on /home/XXXX/XXXX/Temp/tempmount type fuse (ro,nosuid,nodev,relatime,user_id=XXXX,group_id=XXXX,default_permissions)

Does the GUI show the correct mount path next to the archive name you mounted?

Yes it does show the mountpoint correctly in the GUI. Everything works fine with that mount: I can access it, I can extract files from it, etc. The only problem is that I cannot unmount it anymore; I have to reboot to get rid of the mount. (I might be able to unmount it via the terminal, but as a Vorta and borg noob I am reluctant to fiddle around with the borgfs/fuse volume without concrete instructions since the archive contains real production data and I don't want to mess it up.)

I rebooted and tried out the following:

No cure.

real-yfprojects commented 2 years ago

but as a Vorta and borg noob I am reluctant to fiddle around with the borgfs/fuse volume without concrete instructions since the archive contains real production data and I don't want to mess it up

borg umount <path> or unmount <path> should do the job. Those two commands are completely safe.

Yes it does show the mountpoint correctly in the GUI.

Double clicking on the path in the GUI opens the correct folder?

I created a debug version of vorta for this issue. You can install it with the following commands. This won't touch borg, affect your repositories or your configuration of vorta.

sudo apt install python3-pip python3-dev build-essentials git && \
pip3 install "vorta @ https://github.com/real-yfprojects/vorta/archive/refs/heads/debug%231461.zip"

After that try reproducing the issue and have a look into the logs for a line starting with Selected archive.

Tom-H-L commented 2 years ago

borg umount <path> or unmount <path> should do the job. Those two commands are completely safe.

I tried borg umount <path> (beware for GNU/Linux readers: umount, not unmount) and the volume unmounted successfully, thanks for that information! What else happened:

Double clicking on the path in the GUI opens the correct folder?

Yes!

I created a debug version of vorta for this issue. You can install it with the following commands. This won't touch borg, affect your repositories or your configuration of vorta.

sudo apt install python3-pip python3-dev build-essentials git && \
pip3 install "git+https://github.com/real-yfprojects/vorta@f64c045#egg=vorta"

After that try reproducing the issue and have a look into the logs for a line starting with Selected archive.

That sounds great, but unfortunately due to policy reasons I am not able to use anything else on this production machine than stuff coming from the official Debian repositories that Ubuntu uses (only from "main" and "universe"). So I am afraid and sorry that I cannot do that on this machine right now. Please tell me if going this route would be of great significance for finding this bug; in that case I could set up a testing VM with identical setup to try to reproduce the bug there and use the tool you created on it, but this would not be earlier than in a few days. Is there anything else that I could try out and report right now on this machine with the tools I got?

real-yfprojects commented 2 years ago

Is there anything else that I could try out and report right now on this machine with the tools I got?

I don't know of anything. To me this is the definition of an impossible bug.

Tom-H-L commented 2 years ago

Updated to 0.8.7, no cure

quazar-omega commented 1 year ago

I've been experiencing the same issue up to the current Vorta 0.8.12 (Flatpak); I'm not sure what conditions recreate it.
In my case there may be a loose connection to putting the system into sleep mode, waking it, and trying to unmount, but I can't confirm that yet.

Update: it seems the two aren't related; even trying to unmount immediately after mounting doesn't work.

real-yfprojects commented 1 year ago

Thank you @quazar-omega for reporting. Can you run the debug version from https://github.com/borgbase/vorta/issues/1461#issuecomment-1302409318, reproduce the issue and post the logs here?

quazar-omega commented 1 year ago

Sure! I'll try that soon

quazar-omega commented 1 year ago

Sorry if my soon isn't being very soon (∩´ᴖ`∩)
Can I ask you if these instructions still apply? I'm using Fedora Kinoite so I'm guessing I could try to run that in a Debian Distrobox container?

real-yfprojects commented 1 year ago

Can I ask you if these instructions still apply?

They do still apply for Debian/Ubuntu. For Fedora the following should work:

sudo dnf install python3-pip git 
python -m pip install --user "vorta @ https://github.com/real-yfprojects/vorta/archive/refs/heads/debug%231461.zip"
quazar-omega commented 1 year ago

Ok, I've run it in a Debian 12 container

My process:

  1. Installed the dependencies, added `pipx` to install vorta cleanly later (and corrected `build-essentials` typo):

    sudo apt install python3-pip python3-dev build-essential git pipx

  2. Installed the vorta test build:

    pipx install "vorta @ https://github.com/real-yfprojects/vorta/archive/refs/heads/debug%231461.zip"
    pipx ensurepath

  3. Running it like that I realized I was missing all the application dependencies, so I installed the Debian package for vorta:

    sudo apt install vorta

  4. Renamed and ran:

    mv .local/bin/vorta .local/bin/vorta-testing
    vorta-testing

Mounting doesn't seem to work, it outputs this:

2023-09-10 17:44:30,468 - vorta.borg.borg_job - INFO - Running command /usr/bin/borg --log-json mount /run/media/amusing-dove/Backup/backup-fedora-workstation::ddd174-f5765038 /home/amusing-dove/.local/share/box-homes/Box-Debian/mnt
2023-09-10 17:44:34,033 - vorta.borg.borg_job - WARNING - fuse: failed to exec fusermount3: No such file or directory
2023-09-10 17:44:34,193 - vorta.borg.jobs_manager - DEBUG - Finish job for site: 1
2023-09-10 17:44:34,195 - vorta.borg.jobs_manager - DEBUG - No more jobs for site: 1

Maybe I'll try to install on my host OS

real-yfprojects commented 1 year ago

Maybe I'll try to install on my host OS

That will probably save time.

Mounting doesn't seem to work, it outputs this:

Most likely the libfuse package is missing.

quazar-omega commented 1 year ago

Alright, I've got something: I managed to run it now by installing it the same way on my host.

It (luckily) behaves the same as my original installation, so this is my output when I click on "Unmount":

Selected archive: ddd174-f5765038
Mountpoint: /home/amusing-dove/.mnt/old-backup
Mountpoints: {'ddd174-f5765038': '/home/amusing-dove/.mnt/old-backup'}
2023-09-11 15:08:07,723 - vorta.keyring.abc - DEBUG - Only available on macOS
2023-09-11 15:08:07,725 - vorta.borg.borg_job - DEBUG - Using VortaKWallet5Keyring keyring to store passwords.
Partitions [sdiskpart(device='proc', mountpoint='/proc', fstype='proc', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='sysfs', mountpoint='/sys', fstype='sysfs', opts='rw,seclabel,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='devtmpfs', mountpoint='/dev', fstype='devtmpfs', opts='rw,seclabel,nosuid,size=4096k,nr_inodes=2006778,mode=755,inode64', maxfile=255, maxpath=4096), sdiskpart(device='securityfs', mountpoint='/sys/kernel/security', fstype='securityfs', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='tmpfs', mountpoint='/dev/shm', fstype='tmpfs', opts='rw,seclabel,nosuid,nodev,inode64', maxfile=255, maxpath=4096), sdiskpart(device='devpts', mountpoint='/dev/pts', fstype='devpts', opts='rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000', maxfile=255, maxpath=4096), sdiskpart(device='tmpfs', mountpoint='/run', fstype='tmpfs', opts='rw,seclabel,nosuid,nodev,size=3229212k,nr_inodes=819200,mode=755,inode64', maxfile=255, maxpath=4096), sdiskpart(device='cgroup2', mountpoint='/sys/fs/cgroup', fstype='cgroup2', opts='rw,seclabel,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot', maxfile=255, maxpath=4096), sdiskpart(device='pstore', mountpoint='/sys/fs/pstore', fstype='pstore', opts='rw,seclabel,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='efivarfs', mountpoint='/sys/firmware/efi/efivars', fstype='efivarfs', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='bpf', mountpoint='/sys/fs/bpf', fstype='bpf', opts='rw,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-vconsole-setup.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='configfs', mountpoint='/sys/kernel/config', fstype='configfs', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/sysroot', fstype='btrfs', opts='ro,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/root', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/', fstype='btrfs', opts='rw,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/root', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/etc', fstype='btrfs', opts='rw,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/root', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/usr', fstype='btrfs', opts='ro,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/root', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/sysroot/ostree/deploy/fedora/var', fstype='btrfs', opts='rw,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/root', maxfile=255, maxpath=4096), sdiskpart(device='selinuxfs', mountpoint='/sys/fs/selinux', fstype='selinuxfs', opts='rw,nosuid,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='systemd-1', mountpoint='/proc/sys/fs/binfmt_misc', fstype='autofs', opts='rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=23710', maxfile=255, maxpath=4096), sdiskpart(device='binder', mountpoint='/dev/binderfs', fstype='binder', opts='rw,relatime,max=1048576', maxfile=255, maxpath=4096), sdiskpart(device='hugetlbfs', mountpoint='/dev/hugepages', fstype='hugetlbfs', opts='rw,seclabel,relatime,pagesize=2M', maxfile=255, maxpath=4096), sdiskpart(device='mqueue', mountpoint='/dev/mqueue', fstype='mqueue', opts='rw,seclabel,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='debugfs', mountpoint='/sys/kernel/debug', fstype='debugfs', opts='rw,seclabel,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='tracefs', mountpoint='/sys/kernel/tracing', fstype='tracefs', opts='rw,seclabel,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='fusectl', mountpoint='/sys/fs/fuse/connections', fstype='fusectl', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-sysctl.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-sysusers.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-tmpfiles-setup-dev.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/var', fstype='btrfs', opts='rw,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=256,subvol=/var', maxfile=255, maxpath=4096), sdiskpart(device='/dev/nvme1n1p2', mountpoint='/boot', fstype='ext4', opts='rw,seclabel,relatime', maxfile=255, maxpath=4096), sdiskpart(device='tmpfs', mountpoint='/tmp', fstype='tmpfs', opts='rw,seclabel,nosuid,nodev,size=8073032k,nr_inodes=1048576,inode64', maxfile=255, maxpath=4096), sdiskpart(device='/dev/mapper/luks-353e522f-c0f3-4167-99fc-90d576a734e8', mountpoint='/var/home', fstype='btrfs', opts='rw,seclabel,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=257,subvol=/home', maxfile=255, maxpath=4096), sdiskpart(device='/dev/nvme1n1p1', mountpoint='/boot/efi', fstype='vfat', opts='rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro', maxfile=1530, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-tmpfiles-setup.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='binfmt_misc', mountpoint='/proc/sys/fs/binfmt_misc', fstype='binfmt_misc', opts='rw,nosuid,nodev,noexec,relatime', maxfile=255, maxpath=4096), sdiskpart(device='ramfs', mountpoint='/run/credentials/systemd-resolved.service', fstype='ramfs', opts='ro,seclabel,nosuid,nodev,noexec,relatime,mode=700', maxfile=255, maxpath=4096), sdiskpart(device='sunrpc', mountpoint='/var/lib/nfs/rpc_pipefs', fstype='rpc_pipefs', opts='rw,relatime', maxfile=255, maxpath=4096), sdiskpart(device='tmpfs', mountpoint='/run/user/1000', fstype='tmpfs', opts='rw,seclabel,nosuid,nodev,relatime,size=1614604k,nr_inodes=403651,mode=700,uid=1000,gid=1000,inode64', maxfile=255, maxpath=4096), sdiskpart(device='portal', mountpoint='/run/user/1000/doc', fstype='fuse.portal', opts='rw,nosuid,nodev,relatime,user_id=1000,group_id=1000', maxfile=None, maxpath=4096), sdiskpart(device='/dev/mapper/luks-6ad1bc21-b2c4-42ee-b830-cc56fbb7d3c7', mountpoint='/run/media/amusing-dove/Backup', fstype='ext4', opts='rw,seclabel,nosuid,nodev,relatime,errors=remount-ro', maxfile=255, maxpath=4096), sdiskpart(device='borgfs', mountpoint='/var/home/amusing-dove/.mnt/old-backup', fstype='fuse', opts='ro,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions', maxfile=255, maxpath=4096)]
Umount params {'ok': True, 'password': 'expand anyplace ladies swizzle dill amuck', 'ssh_key': None, 'repo_id': 1, 'repo_url': '/run/media/amusing-dove/Backup/backup-fedora-workstation', 'extra_borg_arguments': '', 'profile_name': 'Default', 'profile_id': 1, 'active_mount_points': ['/var/home/amusing-dove/.mnt/old-backup'], 'cmd': ['borg', 'umount', '--log-json']}

(Realized I just leaked my password like a dumbass... well, whatever)
real-yfprojects commented 1 year ago

(Realized I just leaked my password like a dumbass... well, whatever)

Sorry about that. While the password is already exposed, I can still completely redact it from GitHub if you want.

I will look into your results later when I have more time. Thank you for running the debug build.

quazar-omega commented 1 year ago

It's fine, there's no need; it's all local anyway. I can just change it or make a new repo. Once it's on the internet it's already compromised, after all :)

About the message: just giving it a mindless skim, it doesn't look like there's any specific error reported; is that actually the important part? It is what was output each time I clicked the button, so I'm wondering if there's more info to give. I'll stay available to help troubleshoot.

real-yfprojects commented 1 year ago

About the message: just giving it a mindless skim, it doesn't look like there's any specific error reported; is that actually the important part?

No, it is printing out extra information (that's also why your passwords appeared there; usually they aren't logged). Hopefully this information will allow me to get closer to the root cause of this issue. However, I can't guarantee anything.

quazar-omega commented 1 year ago

Got it, thanks for looking into this!

real-yfprojects commented 1 year ago

Ok, I got closer to the problem (at least in your case @quazar-omega). While you seem to have mounted the archive into /home/amusing-dove/.mnt/old-backup, psutil.disk_partitions(all=True) will return the mountpoint /var/home/amusing-dove/.mnt/old-backup. This explains Vorta's behaviour. However I don't know where the /var comes from.

Can you reproduce the bug and check the output of

cat /proc/mounts

which should list all mounts. This will help determine whether there is an issue with psutil. If that also outputs /var/home/..., can you check that the archive isn't mounted in /home/amusing-dove/... but only in /var/home/...?
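
If it is easier to do from Python, a small sketch like the following would cross-check psutil against /proc/mounts and show whether the path goes through a symlink (the path below is a placeholder for the actual mount path):

    # Cross-check psutil against /proc/mounts and see whether the mount path
    # resolves to a different location through a symlink.
    import os
    import psutil

    MOUNT_PATH = '/home/amusing-dove/.mnt/old-backup'  # replace with the actual mount path

    print('realpath:', os.path.realpath(MOUNT_PATH))

    with open('/proc/mounts') as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if device == 'borgfs' or 'fuse' in fstype:
                print('/proc/mounts:', device, mountpoint, fstype)

    for p in psutil.disk_partitions(all=True):
        if p.device == 'borgfs' or 'fuse' in p.fstype:
            print('psutil:', p.device, p.mountpoint, p.fstype)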

quazar-omega commented 1 year ago

I looked through the output; at the end I see:

borgfs /var/home/amusing-dove/.mnt/old-backup fuse ro,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions 0 0

I assume that's what I'm looking for? It is listed with /var, which I can further confirm with the fact that /home is a symbolic link to /var/home:

/ $ ls -l
total 48
lrwxrwxrwx.   4 root root    7 May  1 19:02 bin -> usr/bin
drwxr-xr-x.   7 root root 4096 Sep 10 18:04 boot
drwxr-xr-x.  21 root root 4640 Sep 16 13:22 dev
drwxr-xr-x.   1 root root 4506 Sep 11 12:46 etc
lrwxrwxrwx.   4 root root    8 May  1 19:02 home -> var/home
lrwxrwxrwx.   7 root root    7 May  1 19:02 lib -> usr/lib
lrwxrwxrwx.   7 root root    9 May  1 19:02 lib64 -> usr/lib64
lrwxrwxrwx.   4 root root    9 May  1 19:02 media -> run/media
lrwxrwxrwx.   4 root root    7 May  1 19:02 mnt -> var/mnt
lrwxrwxrwx.   4 root root    7 May  1 19:02 opt -> var/opt
lrwxrwxrwx.   4 root root   14 May  1 19:02 ostree -> sysroot/ostree
dr-xr-xr-x. 466 root root    0 Sep 16  2023 proc
lrwxrwxrwx.   4 root root   12 May  1 19:02 root -> var/roothome
drwxr-xr-x.  51 root root 1300 Sep 16 13:20 run
lrwxrwxrwx.   4 root root    8 May  1 19:02 sbin -> usr/sbin
lrwxrwxrwx.   4 root root    7 May  1 19:02 srv -> var/srv
dr-xr-xr-x.  13 root root    0 Sep 16 13:18 sys
drwxr-xr-x.   1 root root   74 May  1 19:02 sysroot
drwxrwxrwt.  19 root root  460 Sep 16 13:23 tmp
drwxr-xr-x.   1 root root  106 Jan  1  1970 usr
drwxr-xr-x.   1 root root  222 Sep 11 12:46 var

So the archive is mounted on "both", i.e. the actual path just has /var prepended.

real-yfprojects commented 1 year ago

I see. So in your case the home directory is a symlink to /var/home; that's why /proc/mounts reports the mount there. Consequently, this issue can be fixed by implementing support for symlinks in the mount path.
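
A minimal sketch of what such a fix could look like, assuming the check currently compares the raw path strings (the helper name is illustrative, not Vorta's actual function):

    # Illustrative only: treat a mount point as active if its resolved (symlink-free)
    # path matches the resolved path of any partition reported by psutil, so that
    # /home/... (a symlink) and /var/home/... (its target) compare as equal.
    import os
    import psutil

    def is_mount_point_active(mount_path: str) -> bool:
        wanted = os.path.realpath(mount_path)
        return any(
            os.path.realpath(p.mountpoint) == wanted
            for p in psutil.disk_partitions(all=True)
        )

With that normalization, the mount reported by psutil under /var/home/amusing-dove/.mnt/old-backup would still match the /home/amusing-dove/.mnt/old-backup path shown in the GUI.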

@Tom-H-L Does the mount path you reported this issue for contain any hard or symbolic links?