Closed QuarkZ26 closed 9 months ago
WARNING: No changes were detected, aborting...
That's to be expected since, in the meantime, no new snapshots were added and no present ones were deleted.
And this continues until I reboot. Any idea as to why it's happening?
It's as if something changed with the mtab between the two runs. I've not seen this happen before, to anyone. What happens when you start the refind-btrfs systemd service and check the output of journalctl -u refind-btrfs -b? Did you ever start the service?
It'd be interesting to compare the output of:
lsblk --merge --paths --output NAME,TYPE,MAJ:MIN
lsblk /dev/nvme0n1 --paths --tree --output PTUUID,PTTYPE,PARTUUID,PARTTYPE,PARTLABEL,UUID,NAME,FSTYPE,LABEL,MOUNTPOINT
findmnt --mtab --real --nofsroot --output PARTUUID,PARTLABEL,UUID,SOURCE,FSTYPE,LABEL,TARGET,OPTIONS
immediately after a successful boot and then again after a manual run of refind-btrfs.
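The three commands above can be wrapped in a small capture script so the two states can be diffed instead of eyeballed. A sketch, with the /tmp paths and function name as arbitrary choices:

```shell
# Capture the three listings into a named directory for later comparison.
capture() {
  out="$1"
  mkdir -p "$out"
  lsblk --merge --paths --output NAME,TYPE,MAJ:MIN > "$out/lsblk-merge.txt" 2>&1
  lsblk /dev/nvme0n1 --paths --tree \
    --output PTUUID,PTTYPE,PARTUUID,PARTTYPE,PARTLABEL,UUID,NAME,FSTYPE,LABEL,MOUNTPOINT \
    > "$out/lsblk-tree.txt" 2>&1
  findmnt --mtab --real --nofsroot \
    --output PARTUUID,PARTLABEL,UUID,SOURCE,FSTYPE,LABEL,TARGET,OPTIONS \
    > "$out/findmnt.txt" 2>&1
}

capture /tmp/state-before        # immediately after boot
# ... later, after a manual run of refind-btrfs:
capture /tmp/state-after
# Exit status 1 from diff just means the two states differ.
diff -r /tmp/state-before /tmp/state-after || true
```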
Thanks, this helped. It turns out that BTRFS Assistant creates a new mountpoint when it creates a manual snapshot, and that mountpoint is what lsblk reports:
{
"ptuuid": "2e39261b-712a-4a77-95e7-bd50445f9df5",
"pttype": "gpt",
"partuuid": "3c92f407-53af-464d-8a3a-160f633174f9",
"parttype": "0fc63daf-8483-4772-8e79-3d69d8477de4",
"partlabel": null,
"uuid": "15893a4f-4744-4d40-aea8-e1ee8c2e5033",
"name": "/dev/nvme0n1p8",
"fstype": "btrfs",
"label": "EOS",
"mountpoint": "/run/BtrfsAssistant/15893a4f-4744-4d40-aea8-e1ee8c2e5033"
}
But findmnt shows that / still exists:
PARTUUID PARTLABEL UUID SOURCE FSTYPE LABEL TARGET OPTIONS
3c92f407-53af-464d-8a3a-160f633174f9 15893a4f-4744-4d40-aea8-e1ee8c2e5033 /dev/disk/by-uuid/15893a4f-4744-4d40-aea8-e1ee8c2e5033 btrfs EOS / rw,noatime,compress-force=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=
mergerfs fuse.mergerfs /mnt/Downloads rw,relatime,user_id=0,group_id=0,default_permissions,allow_other
3c92f407-53af-464d-8a3a-160f633174f9 15893a4f-4744-4d40-aea8-e1ee8c2e5033 /dev/disk/by-uuid/15893a4f-4744-4d40-aea8-e1ee8c2e5033 btrfs EOS /home rw,noatime,compress-force=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=
049e207c-e7d3-49f5-994a-669039b27462 CC50-B272 /dev/nvme0n1p7 vfat BOOT /efi rw,noatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,err
13729281-efec-427d-bf81-144a371e9595 6bd48411-76e1-4dbc-b972-3ea153a9f9b3 /dev/sda1 ext4 SSD /mnt/SSD rw,noatime,nodiratime
9715d8c9-419b-48c3-af93-d08b280c8010 c1db8a11-1485-4d58-86a0-52678e992477 /dev/nvme0n1p5 ext4 Games /mnt/Games rw,relatime
f164c2ba-dfcd-48ee-9e38-0f1b2f7aad87 c08e6cd2-1a49-4b2f-a6c8-f61cde578c1b /dev/sdf1 ext4 HDD4 /mnt/HDD/HDD4 rw,relatime
29efda9f-8934-4941-a2e1-17ae1ae178c6 bdbf58de-be82-4096-8dcb-6a8d74c6bf50 /dev/sdc1 ext4 HDD1 /mnt/HDD/HDD1 rw,relatime
4dcf226e-5b5b-44cf-adf4-c2015183c3a1 1ed55621-4720-455f-a379-033f78cdb181 /dev/sdd1 ext4 HDD2 /mnt/HDD/HDD2 rw,relatime
6b9e6731-785e-47d6-ae30-a1c368e8d358 b7afc6dd-762e-4051-bfaa-3534a964a4ba /dev/sde1 ext4 HDD3 /mnt/HDD/HDD3 rw,relatime
0bf70799-f4f8-4e96-a905-8f83944d2a5d 3f6b1e32-f834-4be9-a952-5c5454d928e9 /dev/sdb1 ext4 Backups /mnt/Backups rw,relatime
portal fuse.portal /run/user/1000/doc rw,nosuid,nodev,relatime,user_id=1000,group_id=1000
3c92f407-53af-464d-8a3a-160f633174f9 15893a4f-4744-4d40-aea8-e1ee8c2e5033 /dev/disk/by-uuid/15893a4f-4744-4d40-aea8-e1ee8c2e5033 btrfs EOS /run/BtrfsAssistant/15893a4f-4744-4d40-aea8-e1ee8c2e5033 rw,relatime,compress-force=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid
So shouldn't refind-btrfs still find it?
This is rather bizarre behavior; I might have to ask them about it. I don't really need manual snapshots anyway.
With that said, I started the service, because I hadn't before, and it doesn't seem to be picking up changes at all; it just sits there:
Nov 05 08:47:57 QuarkZ-Linux /usr/lib/python3.11/site-packages/refind_btrfs/__main__.py[10948]: Scheduling watch for directories: /.snapshots, /.snapshots/1, /.snapshots/105, /.snapshots/106, /.snapshots/107, /.snapshots/108, /.snapshots/109, /.snapshots/110, /.snap>
Nov 05 08:47:57 QuarkZ-Linux /usr/lib/python3.11/site-packages/refind_btrfs/__main__.py[10948]: Starting the observer with PID 10948.
Nov 05 08:47:57 QuarkZ-Linux systemd[1]: Started Generate rEFInd manual boot stanzas from Btrfs snapshots.
I went to check the stanza to confirm that it hadn't changed since the last manual run I did.
But findmnt shows that / still exists
I can't see the "subvolid" values. The "subvol" and/or "subvolid" options should be present in the output. Either way, I can't really be expected to support every single piece of software which somehow manipulates the mtab (messes with it, to be honest).
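The OPTIONS column in the paste above is cut off right at subvolid=, which is exactly the part that matters. The full values can be rechecked with findmnt --mtab --real --nofsroot --output TARGET,OPTIONS; as a sketch, here is how the two options can be pulled out of an mtab options string (the sample line and subvolid value below are made up):

```shell
# Hypothetical mtab line; on a real system take it from findmnt's OPTIONS column.
line='/dev/nvme0n1p8 / btrfs rw,noatime,compress-force=zstd:3,subvolid=256,subvol=/@'

# Extract the subvolid and subvol values from the comma-separated options.
subvolid=$(printf '%s\n' "$line" | sed -n 's/.*subvolid=\([0-9]*\).*/\1/p')
subvol=$(printf '%s\n' "$line" | sed -n 's/.*subvol=\([^,]*\).*/\1/p')
echo "subvolid=$subvolid subvol=$subvol"   # → subvolid=256 subvol=/@
```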
With that said, I started the service, cause I hadn't before, and it doesn't seem to be picking up changes at all, it just stays there
This seems to be related to a pretty old and, as of yet, unresolved issue. Was this output provided by journalctl?
It was in journalctl yes, though I'm trying to run the program manually, and I'm back to the original error, even though the mountpoint hasn't changed this time
PARTUUID PARTLABEL UUID SOURCE FSTYPE LABEL TARGET OPTIONS
3c92f407-53af-464d-8a3a-160f633174f9 15893a4f-4744-4d40-aea8-e1ee8c2e5033 /dev/disk/by-uuid/15893a4f-4744-4d40-aea8-e1ee8c2e5033 btrfs EOS / rw,noatime,compress-force=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=2
mergerfs fuse.mergerfs /mnt/Downloads rw,relatime,user_id=0,group_id=0,default_permissions,allow_other
3c92f407-53af-464d-8a3a-160f633174f9 15893a4f-4744-4d40-aea8-e1ee8c2e5033 /dev/disk/by-uuid/15893a4f-4744-4d40-aea8-e1ee8c2e5033 btrfs EOS /home rw,noatime,compress-force=zstd:3,ssd,discard=async,space_cache=v2,autodefrag,subvolid=2
049e207c-e7d3-49f5-994a-669039b27462 CC50-B272 /dev/nvme0n1p7 vfat BOOT /efi rw,noatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,erro
13729281-efec-427d-bf81-144a371e9595 6bd48411-76e1-4dbc-b972-3ea153a9f9b3 /dev/sda1 ext4 SSD /mnt/SSD rw,noatime,nodiratime
9715d8c9-419b-48c3-af93-d08b280c8010 c1db8a11-1485-4d58-86a0-52678e992477 /dev/nvme0n1p5 ext4 Games /mnt/Games rw,relatime
f164c2ba-dfcd-48ee-9e38-0f1b2f7aad87 c08e6cd2-1a49-4b2f-a6c8-f61cde578c1b /dev/sdf1 ext4 HDD4 /mnt/HDD/HDD4 rw,relatime
29efda9f-8934-4941-a2e1-17ae1ae178c6 bdbf58de-be82-4096-8dcb-6a8d74c6bf50 /dev/sdc1 ext4 HDD1 /mnt/HDD/HDD1 rw,relatime
4dcf226e-5b5b-44cf-adf4-c2015183c3a1 1ed55621-4720-455f-a379-033f78cdb181 /dev/sdd1 ext4 HDD2 /mnt/HDD/HDD2 rw,relatime
6b9e6731-785e-47d6-ae30-a1c368e8d358 b7afc6dd-762e-4051-bfaa-3534a964a4ba /dev/sde1 ext4 HDD3 /mnt/HDD/HDD3 rw,relatime
0bf70799-f4f8-4e96-a905-8f83944d2a5d 3f6b1e32-f834-4be9-a952-5c5454d928e9 /dev/sdb1 ext4 Backups /mnt/Backups rw,relatime
lsblk is showing this
{
"ptuuid": "2e39261b-712a-4a77-95e7-bd50445f9df5",
"pttype": "gpt",
"partuuid": "3c92f407-53af-464d-8a3a-160f633174f9",
"parttype": "0fc63daf-8483-4772-8e79-3d69d8477de4",
"partlabel": null,
"uuid": "15893a4f-4744-4d40-aea8-e1ee8c2e5033",
"name": "/dev/nvme0n1p8",
"fstype": "btrfs",
"label": "EOS",
"mountpoint": "/home"
}
though this is what is shown at startup and I get no issues at that point.
Now I rebooted and I'm getting a completely different error when running it manually:
Searching for snapshots of the '@' subvolume in the '/.snapshots' directory.
Found subvolume '@' mounted as the root partition.
ERROR (refind_btrfs.state_management.conditions/conditions.py/check_root_subvolume): No snapshots of the '@' subvolume were found!
It was in journalctl yes, though I'm trying to run the program manually, and I'm back to the original error, even though the mountpoint hasn't changed this time
findmnt's output is also crucial, not just lsblk's.
findmnt's output is also crucial, not just lsblk's.
findmnt's output is right above
Now I rebooted and getting a completely different error when running it manually
Searching for snapshots of the '@' subvolume in the '/.snapshots' directory.
Found subvolume '@' mounted as the root partition.
ERROR (refind_btrfs.state_management.conditions/conditions.py/check_root_subvolume): No snapshots of the '@' subvolume were found!
Well, are there any snapshots contained within that directory? If there are, make sure that their parent UUID's are matched with the currently mounted root subvolume.
findmnt's output is right above
It's pretty hard for me to guess what might be wrong but it does look like something similar to this unresolved issue.
Well, are there any snapshots contained within that directory? If there are, make sure that their parent UUID's are matched with the currently mounted root subvolume.
The directory contains the same snapshots as earlier, plus the new timeline ones that snapper creates every hour. Nothing at all has changed since I was last able to run the program successfully after a reboot. I didn't start the Assistant at all, to avoid any change in mountpoint, so I'm not sure why I'm getting this error all of a sudden. I can maybe try deleting all the snapshots and starting from scratch, but it seems odd that this error started appearing out of nowhere.
As for the service, since the issue is unresolved at this point, should I just forget about it? I don't mind running things manually; I mostly intend to use snapshots for pacman updates anyway.
Try deleting the "local_db" file in the /var/lib/refind-btrfs directory. It acts as a sort of cache. Other than that, I have no idea why there'd suddenly be no snapshots found. I use libbtrfsutil to gather information about subvolumes, snapshots and such.
EDIT: Did you confirm that the "@" really is the parent subvolume of these snapshots? Its own UUID must be the same as any given snapshot's parent UUID.
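The comparison can be done by hand with btrfs subvolume show, which prints a UUID line for the subvolume itself and a Parent UUID line for snapshots (e.g. sudo btrfs subvolume show / versus sudo btrfs subvolume show /.snapshots/<N>/snapshot). A sketch of the check with made-up UUIDs:

```shell
# Made-up values for illustration; take the real ones from 'btrfs subvolume show'.
root_uuid='11111111-2222-3333-4444-555555555555'    # UUID: line of the mounted '@'
snap_parent='11111111-2222-3333-4444-555555555555'  # Parent UUID: line of one snapshot

if [ "$root_uuid" = "$snap_parent" ]; then
  echo "snapshot descends from the mounted root subvolume"
else
  echo "snapshot has a different parent"
fi
```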
Actually, it was my fault: I changed the depth to 1, figuring it would only need to look for snapshots directly under '@', but of course I didn't think about the numbered folders. Duh.
So it is now working manually. I guess I'll just avoid BTRFS Assistant and configure everything from the CLI, since apparently just opening the software changes that mountpoint. I will say, it is pretty popular software among people using snapshots, but I understand if you don't care to support it. I will still ask them about that behavior, as I really don't understand it.
Just to confirm, should I just forget about the service?
Actually it was my fault, I changed the depth to 1 as I figured it would only look for snapshots directly under @, but of course I didn't think about the numbered folder. duh
Yes, the default config is set up to work with Snapper.
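That layout is why a search depth of 1 misses everything: Snapper stores each snapshot as /.snapshots/<N>/snapshot, so the actual subvolumes sit two levels below the search directory. This can be illustrated with plain directories (the /tmp/demo path and snapshot number are arbitrary):

```shell
# Mimic Snapper's layout (directory names only; no real subvolumes involved).
mkdir -p /tmp/demo/.snapshots/105/snapshot

find /tmp/demo/.snapshots -maxdepth 1 -name snapshot   # depth 1: finds nothing
find /tmp/demo/.snapshots -maxdepth 2 -name snapshot   # → /tmp/demo/.snapshots/105/snapshot
```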
Just to confirm, should I just forget about the service?
Well, since you changed the search depth back to what it was I don't see why you wouldn't give it another try.
I did try again, but it indeed doesn't seem to be working; journalctl returns:
Nov 05 10:54:36 QuarkZ-Linux /usr/lib/python3.11/site-packages/refind_btrfs/__main__.py[4957]: Scheduling watch for directories: /.snapshots, /.snapshots/1.
Nov 05 10:54:36 QuarkZ-Linux /usr/lib/python3.11/site-packages/refind_btrfs/__main__.py[4957]: Starting the observer with PID 4957.
Nov 05 10:54:36 QuarkZ-Linux systemd[1]: Started Generate rEFInd manual boot stanzas from Btrfs snapshots.
But effectively nothing happens.
Yeah, that's too bad. I don't want to use the watchdog library but there really is no elegant alternative, for now.
Weird indeed. I went through that other thread with the guy who has the same problem. Given my setup, and it being mostly for updates, I think creating a service that runs refind-btrfs at shutdown will work just fine.
I will go ahead and close this, appreciate the time once again!
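For the record, one way to sketch the run-at-shutdown idea is a oneshot unit that does its work in ExecStop, so refind-btrfs runs while the filesystems are still mounted. The unit name and binary path below are assumptions, and the ordering may need tuning for a given setup:

```ini
# /etc/systemd/system/refind-btrfs-shutdown.service  (hypothetical name)
[Unit]
Description=Generate rEFInd snapshot stanzas at shutdown
# Ordering After= means our ExecStop runs before local filesystems are unmounted.
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/true
ExecStop=/usr/bin/refind-btrfs

[Install]
WantedBy=multi-user.target
```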
Hello, me again.
So I'm having this issue where eventually I get this error message. It seems to happen after a new snapshot is created, though it could be circumstantial at this point.
Here's what I get after running the program right after reboot
Now if I run it again after that
Then I went and created a manual snapshot, ran the program again...
And this continues until I reboot. Any idea as to why it's happening?