Zygo / bees

Best-Effort Extent-Same, a btrfs dedupe agent
GNU General Public License v3.0

[Feature Request] Beesd to run a full dedup cycle and then end #279

Open · JaviVilarroig opened this issue 3 months ago

JaviVilarroig commented 3 months ago

I have a script that creates a backup on an external btrfs hard drive.

Once the backup is done, I would like to have the option to run a dedup cycle to remove data redundancy and then unmount the volume.

Currently I have to monitor the journal manually and wait for the dedup run to end before unmounting the volume.

Thanks!

kakra commented 3 months ago

Just let it run for 1-2 hours. bees is best-effort; it has no concept of a full cycle, because while it modifies the filesystem it adds new transactions and thus generates new cycles.

Then inspect your journals to see whether 1-2 hours is enough. If bees falls behind, you'd need to increase the time.

See man timeout.
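
A minimal sketch of that approach, assuming beesd is launched directly from the backup script (the UUID and mount point below are placeholders):

    # Let bees work for at most 2 hours, then send SIGTERM (timeout's default
    # signal) so bees runs its termination code and exits.
    timeout 2h beesd deadbeef-dead-beef-dead-beefdeadbeef
    umount /mnt/backup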

JaviVilarroig commented 3 months ago

In this use case, we are talking about a filesystem that is only receiving the backup and has no other activity. So bees ends its cycle and all the crawlers finish.

No more transactions, no more activity.

I can see that in the logs, but I want the backup script to unmount the filesystem automatically.

I know it's a niche use case, but I think it's legitimate :)

kakra commented 3 months ago

> In this use case, we are talking about a filesystem that is only receiving the backup and has no other activity.

In this case, it may actually work.

How do you watch the logs? Maybe we could implement something like this in the beesd wrapper?

JaviVilarroig commented 3 months ago

I just watch journalctl until I see something similar to this:

mar 31 15:51:14 gondor beesd[50771]: crawl_more[50806]: crawl_more ran out of data after 0.326411s
mar 31 15:51:15 gondor beesd[50771]: crawl_writeback[50815]: Saved crawl state in 0.256s

After that there's no more activity, which is why I think this must be possible.

I think that adding a flag to the bees binary that makes it quit when it runs out of data would do the trick.

I can help with writing a new launcher script or updating the existing one, if needed.
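
A rough sketch of how that wait could be scripted against the current log output (the systemd unit name and mount point are assumptions, and a single "ran out of data" message is only a heuristic, as the next comment explains):

    # Block until crawl_more reports it has run out of data, then stop bees
    # and release the volume.
    journalctl --follow --since now --unit "beesd@$UUID.service" \
        | grep -q 'crawl_more ran out of data'
    systemctl stop "beesd@$UUID.service"
    umount /mnt/backup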

Zygo commented 3 months ago

The condition is something like hitting the "ran out of data" condition twice in a row on every crawler, without encountering any new extents in between in any crawler. The trick is that a new extent almost inevitably appears as a result of bees's own activity, so the condition is never met. Maybe something like "fewer than N extents" where N is a command-line option would work.

In BeesRoots::crawl_thread we could add a check to see whether any root exists with m_finished false after the first time the crawl_more task runs. If there is, keep looping; otherwise, send SIGTERM to pid 0 to trigger the termination code. Technically that will terminate too early, but if you're going to run bees again the following day, the second-pass work simply gets done then, with yesterday's pass-2 data mixed in with today's pass 1. There can also be false positives, since technically nothing synchronizes the state of the crawlers (i.e. a crawler could restart and find new extents while you're iterating over the crawler list).

Somewhere in the issues here, there's a clever script that measures the amount of IO that bees does, and if that drops to zero reads for a few seconds, it terminates the bees process.
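
In that spirit, a reconstruction rather than a copy of that script (the process name, polling interval, and idle threshold here are guesses, and reading /proc/PID/io needs the same privileges as bees itself):

    # Poll bees's cumulative read counter and stop it once reads have been
    # flat for 30 consecutive seconds.
    PID=$(pgrep -o -x bees) || exit 1
    prev="" idle=0
    while [ "$idle" -lt 30 ]; do
        sleep 5
        cur=$(awk '/^read_bytes:/ {print $2}' "/proc/$PID/io") || break
        if [ "$cur" = "$prev" ]; then idle=$((idle + 5)); else idle=0; fi
        prev=$cur
    done
    kill -TERM "$PID" 2>/dev/null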

JaviVilarroig commented 3 months ago

Hmm, I can try the script that measures IO activity.

Thanks for the idea.