Open bshor opened 5 days ago
> But I can't seem to get httm to look at /btrbk on the other drive. /snapshots and /btrbk look the same... a long list of snapshots.
In general, re: ZFS, we have tried to do this via the `-a, --alt-replicated` flag, which will auto-discover snapshots of the type `rpool/ROOT/ubuntu_w0ha6q` when "backup-ed" to a local data pool, like so `data/rpool/ROOT/ubuntu_w0ha6q` or `data/something/anotherthing/rpool/ROOT/ubuntu_w0ha6q`.

Re: btrfs, auto-discovery of such pools may not be possible because of how you sent (or copied, which, if you see my next comment, is a no-no!) the datasets. If it were me, I'd first try to be as close as possible to the ZFS topology and see if you can't use `-a`, because it won't require any extra legwork on your part like `--map-aliases`.
`WARN: Mount "/btrbk" appears to have no snapshots available.`
Here, it seems `httm` is not detecting any snapshot subvolumes at `/btrbk`, which means -- perhaps you're not doing a `btrfs send` to your `data` pool, but instead just copying new datasets? Note, `httm` will not discover snapshots which aren't snapshot subvolumes. My guess is that this is your problem.

See below, `httm` discovers the `/btrbk` dataset but does not see any snapshots, because the snapshots are just raw directories, not btrfs subvolumes (which you can see via `btrfs subvolume list -s /path/to/btrfs/volume`):
```
map_of_snaps: MapOfSnaps {
    inner: {
        "/backup": [],
        "/btrbk": [],
        "/data": [
            "/snapshots/@data.20240307T0907",
            "/snapshots/data.20240804T0000",
            "/snapshots/data.20241002T2351",
            "/snapshots/data.20241003T2215",
            "/snapshots/data.20241004T0015",
            "/snapshots/data.20241005T0015",
            "/snapshots/data.20241005T1215",
            "/snapshots/data.20241005T1245",
            "/snapshots/data.20241005T1315",
            "/snapshots/data.20241005T1345",
        ],
        "/extra": [],
        "/more": [],
        "/snapshots": [],
    },
},
```
@bshor very much appreciate you filing a bug report. Especially the included detail.
However, if I am to keep this issue open, I'll need you to confirm whether my supposition is correct -- that is, that you copied, rather than sent, the sub-volumes to a new location -- or to tell me whether perhaps something else is going on.
Again -- I'm very pleased to discuss further, but will need a response from you.
Thanks!
I apologize for the late response, work has been hectic. I will respond in detail in the next 12 hours (replacing this response).
Ok, back on! I'll be much quicker to respond now. I appreciate the very close attention you are giving me!
> But I can't seem to get httm to look at /btrbk on the other drive. /snapshots and /btrbk look the same... a long list of snapshots.
>
> In general, re: ZFS, we have tried to do this via the `-a, --alt-replicated` flag, which will auto-discover snapshots of the type `rpool/ROOT/ubuntu_w0ha6q` when "backup-ed" to a local data pool, like so `data/rpool/ROOT/ubuntu_w0ha6q` or `data/something/anotherthing/rpool/ROOT/ubuntu_w0ha6q`.
Unfortunately, the `-a` flag doesn't seem to work:

```
sudo httm -a /data/btrfs_free.sh
NOTICE: Falling back to detection of btrfs snapshot mounts perhaps defined by Snapper re: mount: "/extra"
────────────────────────────────────────────────────────────────────────────────
Sun Jun 09 02:54:59 2024  1.3 KiB  "/snapshots/data.20240804T0000/btrfs_free.sh"
Sat Aug 24 01:27:10 2024  1.3 KiB  "/snapshots/data.20241002T2351/btrfs_free.sh"
────────────────────────────────────────────────────────────────────────────────
Sat Aug 24 01:27:10 2024  1.3 KiB  "/data/btrfs_free.sh"
────────────────────────────────────────────────────────────────────────────────
```
> Re: btrfs, auto-discovery of such pools may not be possible because of how you sent (or copied, which, if you see my next comment, is a no-no!) the datasets. If it were me, I'd first try to be as close as possible to the ZFS topology and see if you can't use `-a`, because it won't require any extra legwork on your part like `--map-aliases`.
>
> `WARN: Mount "/btrbk" appears to have no snapshots available.`
>
> Here, it seems `httm` is not detecting any snapshot subvolumes at `/btrbk`, which means -- perhaps you're not doing a `btrfs send` to your `data` pool, but instead just copying new datasets? Note, `httm` will not discover snapshots which aren't snapshot subvolumes. My guess is that this is your problem.
Ok, so /btrbk is definitely the TARGET of send operations, which are executed by btrbk. The whole point of btrbk is to unify taking snapshots and sending them to a backup location (here, /btrbk). Here's my btrbk configuration (/etc/btrbk/btrbk.conf):
```
timestamp_format       long
transaction_log        /var/log/btrbk.log
lockfile               /var/lock/btrbk.lock
send_compressed_data   yes
snapshot_preserve_min  2h
snapshot_preserve      3d 2w 3m
target_preserve_min    4h
target_preserve        7d 6w 9m

volume /data
  snapshot_dir /snapshots
  subvolume /data
    target /btrbk
```
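For context, a btrbk run under a config like this is roughly equivalent to the following manual btrfs steps. This is only a sketch of the idea, not btrbk's literal invocation, and the timestamp names here are made up for illustration:

```shell
# Sketch (assumed names, not btrbk's exact commands):
# 1. take a read-only snapshot of the live subvolume...
STAMP=20241005T1345   # hypothetical timestamp_format-long name
sudo btrfs subvolume snapshot -r /data "/snapshots/data.${STAMP}"

# 2. ...then send it to the backup drive, incrementally against the
#    previous snapshot, where `btrfs receive` materializes it as a
#    new subvolume under /btrbk:
sudo btrfs send -p "/snapshots/data.20241005T1315" \
    "/snapshots/data.${STAMP}" | sudo btrfs receive /btrbk
```

If that's what btrbk is doing, the entries under /btrbk should be real subvolumes created by `btrfs receive`, not plain directory copies.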
> See below, `httm` discovers the `/btrbk` dataset but does not see any snapshots, because the snapshots are just raw directories, not btrfs subvolumes (which you can see via `btrfs subvolume list -s /path/to/btrfs/volume`):
>
> ```
> map_of_snaps: MapOfSnaps {
>     inner: {
>         "/backup": [],
>         "/btrbk": [],
>         "/data": [
>             "/snapshots/@data.20240307T0907",
>             "/snapshots/data.20240804T0000",
>             "/snapshots/data.20241002T2351",
>             "/snapshots/data.20241003T2215",
>             "/snapshots/data.20241004T0015",
>             "/snapshots/data.20241005T0015",
>             "/snapshots/data.20241005T1215",
>             "/snapshots/data.20241005T1245",
>             "/snapshots/data.20241005T1315",
>             "/snapshots/data.20241005T1345",
>         ],
>         "/extra": [],
>         "/more": [],
>         "/snapshots": [],
>     },
> },
> ```
Hmmm. While I don't fully understand this debug output, I'm trying to express that /data is my live data, and that /snapshots is where snapshots of /data go, while /btrbk is where backup snapshots are sent to. Those backup snapshots are very useful because they live a lot longer than in /snapshots.
I guess what I'm confused by is why we don't see entries that look like `/btrbk/data.20240804T0000` underneath /data up above. It seems like httm is only detecting the snapshots in /snapshots.
Just to show you, here's the contents of /btrbk:
And the contents of /snapshots:
Thank you again for your attention and patience!
> In general, re: ZFS, we have tried to do this via the `-a, --alt-replicated` flag, which will auto-discover snapshots of the type `rpool/ROOT/ubuntu_w0ha6q` when "backup-ed" to a local data pool, like so `data/rpool/ROOT/ubuntu_w0ha6q` or `data/something/anotherthing/rpool/ROOT/ubuntu_w0ha6q`.
>
> Unfortunately the -a flag doesn't seem to work.
Yes, as noted, you would first need to rearrange your snapshots to be in the form of a ZFS snapshot topology.
For now, I am supposing this is not possible for you (you're using btrbk, and this is how btrbk does things), and we will move on. In this case, you will need to use `--map-aliases` to view snapshots on a different dataset.
> Ok, so /btrbk is definitely the TARGET of send operations which is executed by btrbk. The whole point of btrbk is to unify taking snapshots and to send them to a backup location (here, /btrbk). Here's my btrbk configuration (/etc/btrbk/btrbk.conf):
>
> ```
> timestamp_format       long
> transaction_log        /var/log/btrbk.log
> lockfile               /var/lock/btrbk.lock
> send_compressed_data   yes
> snapshot_preserve_min  2h
> snapshot_preserve      3d 2w 3m
> target_preserve_min    4h
> target_preserve        7d 6w 9m
> volume /data
>   snapshot_dir /snapshots
>   subvolume /data
>     target /btrbk
> ```
> Hmmm. While I don't fully understand this debug output, I'm trying to express that /data is my live data, and that /snapshots is where snapshots of /data go, while /btrbk is where backup snapshots are sent to. Those backup snapshots are very useful because they live a lot longer than in /snapshots.
I am no expert in `btrbk`, but, from what I understand of your config file, it should be sending, rather than copying, your snapshots to the `/btrbk` dataset. So, it would seem that my first supposition/explanation was incorrect.

The question is then: why doesn't `httm` see those snapshots? `httm` only includes sub-volumes known to `btrfs`. It does not concern itself with your btrbk configuration, because it doesn't know about your btrbk configuration. Therefore, it can't know the directories in `/btrbk` are sub-volumes unless the `btrfs` command indicates as much (`/btrbk` might contain other files and directories which aren't sub-volumes, etc.).
So -- I would check `sudo btrfs subvolume list -s /btrbk` and `sudo btrfs subvolume show /btrbk` to see what each says.
It is possible that even after btrfs sends a backup, it still relates that backup with the same sub-volume, and never re-associates it with a new dataset, like ZFS does. To the extent btrfs is just weird and the behavior is not well-defined, it may be hard to help you, but maybe this is logical in the btrfs world, where a snapshot is sent to a new dataset on the same machine?
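One concrete thing worth checking (my assumption about what matters here): btrfs records a "Received UUID" on subvolumes created by `btrfs receive`. So inspecting one of the individual backup subvolumes -- rather than the top-level /btrbk mount -- should show whether the backups were genuinely received. The snapshot name below is taken from the listing earlier in this thread:

```shell
# Inspect one backup subvolume directly (name from the
# `btrfs subvolume list -s /btrbk` output in this thread).
# A subvolume created by `btrfs receive` should show a non-empty
# "Received UUID"; a plain copy or rsync'd directory would not
# be a subvolume at all.
sudo btrfs subvolume show /btrbk/data.20241011T1020 | grep -i uuid
```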
It's also possible your "backups" are privileged, whereas your "snapshots" are not, and you simply need to use `sudo` in conjunction with `--map-aliases` to view them; see: https://github.com/kimono-koans/httm?tab=readme-ov-file#example-usage

> Note: Users may need to use sudo (or equivalent) to view versions on BTRFS or NILFS2 datasets, or Restic repositories, as BTRFS or NILFS2 snapshots or Restic repositories may require root permissions in order to be visible. Restic and Time Machine backups also require an additional flag, see further discussion of Restic --alt-store in the below.
Thank you again for writing. I feel bad for taking so much of your time for something that appears to be a BTRFS idiosyncrasy -- but since you want your tool to work on BTRFS, and btrbk is VERY common among BTRFS file system enthusiasts, maybe it's a useful exercise.
I'd love to try --map-aliases but as you can see above I tried my best and couldn't figure it out.
```
sudo httm --map-aliases /data:/btrbk /data/btrfs_free.sh
NOTICE: Falling back to detection of btrfs snapshot mounts perhaps defined by Snapper re: mount: "/extra"
────────────────────────────────────────────────────────────────────────────────
Sun Jun 09 02:54:59 2024  1.3 KiB  "/snapshots/data.20240804T0000/btrfs_free.sh"
Sat Aug 24 01:27:10 2024  1.3 KiB  "/snapshots/data.20241002T2351/btrfs_free.sh"
────────────────────────────────────────────────────────────────────────────────
Sat Aug 24 01:27:10 2024  1.3 KiB  "/data/btrfs_free.sh"
────────────────────────────────────────────────────────────────────────────────
```
`btrfs subvolume list -s /btrbk` shows (clipping a bunch to be concise):
```
ID 988  gen 4928  cgen 4924  top level 257 otime 2024-04-07 00:00:03 path data.20240407T0000
ID 1610 gen 8634  cgen 8631  top level 257 otime 2024-05-09 16:00:09 path data.20240505T0001
ID 1940 gen 17038 cgen 17035 top level 257 otime 2024-10-11 09:20:03 path data.20241011T0920
ID 1941 gen 17041 cgen 17038 top level 257 otime 2024-10-11 09:40:02 path data.20241011T0940
ID 1942 gen 17045 cgen 17041 top level 257 otime 2024-10-11 10:00:04 path data.20241011T1000
ID 1943 gen 17045 cgen 17045 top level 257 otime 2024-10-11 10:20:04 path data.20241011T1020
```
`sudo btrfs subvolume show /btrbk` shows:

```
Name:                   @btrbk
UUID:                   e758aa42-c218-cd4d-8e9b-ddead514b91e
Parent UUID:            -
Received UUID:          -
Creation time:          2024-03-01 23:53:39 -0600
Subvolume ID:           257
Generation:             17045
Gen at creation:        8
Parent ID:              5
Top level ID:           5
Flags:                  -
Send transid:           0
Send time:              2024-03-01 23:53:39 -0600
Receive transid:        0
Receive time:           -
Snapshot(s):
```
Misc points:
I'm using httm 0.43.2 on a Debian 12 system, with a BTRFS file system and btrbk for creating snapshots. It works perfectly for showing the older versions living in my snapshots directory, which is located at `@snapshots` and mounted on /snapshots. My data is located at `@data` and mounted on /data. On a separate drive, I have `@btrbk` mounted on /btrbk. This is where I send "backup" snapshots.

For my local drive, these are both top-level subvolumes:
For my backup drive it is as well:
```
ID 257 gen 16551 top level 5 path @btrbk
```
Here's an example of it working as expected using snapshots in /snapshots.
But I can't seem to get httm to look at /btrbk on the other drive.
I tried map-aliases but no luck:
/snapshots and /btrbk look the same... a long list of snapshots.
I'm guessing that httm is trying to look in obvious places for snapshots, and doesn't think to look in some other place for them. Which is fine -- but is there a way to tell it to look there?
Here's the debug output. Interesting that it appears to believe /btrbk is a live subvolume that might have its own snapshots.