Closed: unphased closed this issue 1 month ago.
A basic conceptual question if you'll indulge me...
I came here looking for a way to find all the files I would lose if I were to nuke my snapshots. This is a really important thing to check, and I don't know why yours is the only tool that can do this easily.
I found I can do it via `httm -b -d=only -R /vat`.
Great. Now my question: I have a Time Machine target I set up in a directory, and it has been running for years. I seem to have made a few snapshots of the Time Machine dir, and I believe it's saying 1.2T are referenced by these snapshots.
My question is... Time Machine snapshots are not ZFS snapshots? They are snapshots implemented in the `.sparsebundle/bands/<hex>` files, certainly.
So I am already 99.99% confident that I can safely nuke the snapshots while retaining all Time Machine sparsebundle content, because any snapshots Time Machine itself relies upon are already part of the present state of this ZFS pool.
Just asking for a sanity-check/rubber duck.
By the way, it pulled up all the deleted files with the above command really fast, which was awesome, so I'll definitely be using this tool from now on.
Wow, I can see that, as I am copying files in, the listing shown in the preview of httm is live! That's really nice.
fzf for the win.
It produces insane output in tmux, but luckily, once I ran it outside of tmux, I could see that the output (mostly numbers streaming out) was the number in the fzf command line getting garbled.
LMK if you have trouble reproducing.
`httm` works in tmux on my machine? Perhaps you need to clear your terminal. FYI, `httm` doesn't use `fzf`.
> I came here looking for a way to find all the files I would lose if I were to nuke my snapshots. This is a really important thing to check, and I don't know why yours is the only tool that can do this easily.
You may also want to try deleted recursive mode:

```
httm -d -R ~
```
See the README at: https://github.com/kimono-koans/httm?tab=readme-ov-file#example-usage
Print all files on snapshots deleted from your home directory, recursive, newline delimited, piped to a text file:
```
# pseudo live file versions
➜ httm -d -n -R --no-snap ~ > pseudo-live-versions.txt
# unique snapshot versions
➜ httm -d -n -R --no-live ~ > deleted-unique-versions.txt
```
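A hypothetical follow-up to the commands above (not part of httm itself; it assumes the `deleted-unique-versions.txt` file produced by the second command): tally the listed versions per top-level path to see where the bulk of any loss would fall.

```shell
# Hypothetical post-processing of httm's newline-delimited output
# (assumes deleted-unique-versions.txt exists from the command above).
# Group paths by their first three components, count versions per group,
# and show the largest groups first.
cut -d/ -f1-4 deleted-unique-versions.txt | sort | uniq -c | sort -rn | head
```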
> Great. Now my question: I have a Time Machine target I set up in a directory, and it has been running for years. I seem to have made a few snapshots of the Time Machine dir, and I believe it's saying 1.2T are referenced by these snapshots. My question is... Time Machine snapshots are not ZFS snapshots? They are snapshots implemented in the `.sparsebundle/bands/<hex>` files, certainly.
You'll have to make your question clearer. `httm` works with Time Machine backups as well as ZFS snapshots. Again, see Example Usage.
> So I am already 99.99% confident that I can safely nuke the snapshots while retaining all Time Machine sparsebundle content, because any snapshots Time Machine itself relies upon are already part of the present state of this ZFS pool.
Yes, as you have stated here, this is a tautology, or at least this is how I understand ZFS to work: destroying snapshots does not erase any of the current state of the filesystem.
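To make that concrete, here's the kind of scratch-dataset session I have in mind (the dataset name `tank/scratch` is made up, and these commands need a real pool and root, so treat this as an illustrative sketch rather than something to paste verbatim):

```
# write live data, snapshot it, then destroy the snapshot
echo "live data" > /tank/scratch/file.txt
zfs snapshot tank/scratch@keepme
zfs destroy tank/scratch@keepme
cat /tank/scratch/file.txt   # still there: destroying the snapshot only
                             # removes the ability to restore/roll back,
                             # never the live copy
```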
As a Mac user, however, I'd make sure to verify my Time Machine backups before nuking any of my ZFS snapshots, and I'd make sure my Time Machine backups reached as far back as I wanted them to reach (perhaps by checking with `tmutil listbackups`).
See: https://support.apple.com/guide/mac-help/verify-your-backup-disk-mh26840/mac
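A quick sketch of the kind of check I mean (macOS only; the `tmutil` verbs shown are real, but treat the exact invocations as illustrative):

```
# list backups; the first entry shows how far back the history reaches
tmutil listbackups | head -n 1
# verify the checksums recorded for the most recent backup (can be slow)
sudo tmutil verifychecksums "$(tmutil latestbackup)"
```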
> Just asking for a sanity-check/rubber duck.
Afraid I can't provide that for you. To see how I use Time Machine and ZFS, see my blog entry: https://kimono-koans.github.io/opinionated-guide/#on-network-mount
> It produces insane output in tmux, but luckily, once I ran it outside of tmux, I could see that the output (mostly numbers streaming out) was the number in the fzf command line getting garbled.
>
> LMK if you have trouble reproducing.
https://github.com/user-attachments/assets/c36f35c6-6406-4142-8ab1-205e14dd83a9