pixelb / fslint

Linux file system lint checker/cleaner

Feature: duplicate detection algorithm #145

Open · emergie opened 6 years ago

emergie commented 6 years ago

Resolves #141

This change introduces a Duplicate detection radio button group in which the user may choose a suitable algorithm:

(screenshot: the new Duplicate detection radio buttons)

warning tooltip: (screenshot)

fast test:

dd if=/dev/urandom of=random_1M_a bs=1M count=1   # three distinct 1 MiB random blocks
dd if=/dev/urandom of=random_1M_b bs=1M count=1
dd if=/dev/urandom of=random_1M_c bs=1M count=1
cat random_1M_a random_1M_b > sample_ab           # 2 MiB files sharing the same first 1 MiB
cat random_1M_a random_1M_c > sample_ac
cp sample_ab sample_ab\ \(co\   p\"y\'\]          # exact copy, awkward characters in the name

default md5 & sha1: (screenshot)

unsafe md5 of first 1M: (screenshot)
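
A quick command-line check (not part of the PR) of why the 1M mode is marked unsafe on exactly this kind of data: sample_ab and sample_ac have the same size and share their first 1MiB (random_1M_a), so a hash of the prefix alone cannot tell them apart, while a full-file hash can:

md5sum sample_ab sample_ac               # full-file digests differ
head -c 1048576 sample_ab | md5sum       # first-1MiB digests are identical,
head -c 1048576 sample_ac | md5sum       # so the prefix-only mode groups them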

pixelb commented 6 years ago

Thanks very much for taking the time to provide patches. Have you timed the various modes on your data?

I presume you're not hitting a CPU bottleneck from md5+sha1, since those combined would still cost less than the I/O, especially on spinning rust.

The shortcut of only checking the first 1MiB of each file could of course save a lot. Do you have many large files that are the same size? Did you notice the md5sum_approx script that findup already calls? Would you get the same benefit from bumping the 512 up to 1048576 there? In retrospect, 512 is too small for modern systems anyway.

emergie commented 6 years ago
  1. I haven't done any proper performance testing, but I'm actively battle-testing this change on my data.

  2. Yes, I'm hitting the I/O bottleneck. Here is a sample of my search results, only files >= 500M: (screenshot). Those files are accessed via ext4 -> encryption layer (LUKS) -> iSCSI over a 1GbE link to another box -> md RAID1 -> hard disks. Pumping that amount of data through this setup to calculate md5+sha1 would take ages.

  3. I've seen the md5sum_approx code. For my needs, hashing only the first 512 bytes is not very useful as the main duplicate verification algorithm. Additionally, this check doesn't take file size into account. In fslint every sieving step is independent: if A & B have size n1 and C & D have size n2, then all of them pass the findup "print name, dev, inode & size" step as potential duplicates. If A and C then happen to have the same content in their first 512 bytes, the md5sum_approx step will match them despite their different sizes. I do have such samples in my data, and that is why file_size_1m_md5sum prints both the file size and the md5 hash (see the sketch after this list).

  4. The 1M sample size was chosen arbitrarily, mostly because it's a round number that fits my needs. I'm not sure it is the best choice for everyone (probably not), but it is something to start with.

  5. I'm also wondering whether I placed the UI controls in the right place. Right now it is a radio group in the Advanced search parameters tab. Maybe it should be in the Duplicates tab, as a dropdown/combobox to the right of the Minimum file size input?
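
A minimal sketch of the idea from point 3, assuming (not verified against the PR's actual file_size_1m_md5sum code) that it simply prints the size next to the md5 of the first 1MiB, so files that merely share a 1MiB prefix but differ in size can never be grouped:

for f in "$@"; do
    size=$(stat -c%s -- "$f")                                # file size in bytes
    sum=$(head -c 1048576 -- "$f" | md5sum | cut -d' ' -f1)  # md5 of the first 1MiB
    printf '%s %s  %s\n' "$size" "$sum" "$f"                 # grouping key = size + prefix hash
done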

pixelb commented 6 years ago

Right, md5sum_approx is only used to quickly exclude potential duplicates. If you bump the 512 up to 1MiB it might help exclude more, while still being safe. I.e. the current sieving steps are:

hard_links -> file_size -> md5(512) -> md5(all) -> sha1(all)

You're proposing:

hard_links -> file_size -> md5(1MiB)
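
For illustration, a hedged sketch (not findup's actual code) of what the final md5(all) -> sha1(all) confirmation in the default pipeline amounts to: a candidate pair is only reported as duplicates if both full-file digests match.

same_content() {
    [ "$(md5sum  < "$1")" = "$(md5sum  < "$2")" ] &&
    [ "$(sha1sum < "$1")" = "$(sha1sum < "$2")" ]
}
same_content sample_ab sample_ac || echo "not duplicates"   # the test files above differ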

BTW, if there are many hardlinked files but you're sure they are always in disparate groups, you could enable merge_early in the findup script to restrict sieving to a single member of each hardlink group.

emergie commented 6 years ago

Yes, that is what this pull request proposes - adding two modes: the existing full md5+sha1 pass and the md5-of-first-1MiB shortcut.

The original fslint behaviour is preserved as the default - after the changes from this PR, the md5+sha1 pass remains the default duplicate verification mode.