qarmin / czkawka

Multi-functional app to find duplicates, empty folders, similar images, etc.

Find duplicate files without reading their full content #49

Open kornelski opened 4 years ago

kornelski commented 4 years ago

You could use the approach of https://github.com/kornelski/dupe-krill to hash only as little data as necessary, instead of hashing whole files.

qarmin commented 3 years ago

I tried to read and understand what is going on with the lazy hashing, but I failed — for now it seems I can only read my own code.

But if I understand correctly, it opens the n files that are in a group with the same size, reads a part of each file, hashes it, and compares it with the other partial hashes. Then it throws out the unique hashes and repeats everything until the data ends.
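The loop described above can be sketched as follows. This is not dupe-krill's actual implementation — just a minimal illustration of the idea, operating on in-memory byte slices instead of real files, with `DefaultHasher` standing in for a proper content hash; `narrow_by_partial_hashes` is a hypothetical name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::Hasher;

// Hypothetical sketch: given a group of same-size "files" (byte slices here),
// hash one chunk at a time, split the group by partial hash, drop entries that
// become unique, and repeat until the data ends or every group is settled.
fn narrow_by_partial_hashes<'a>(files: Vec<&'a [u8]>, chunk: usize) -> Vec<Vec<&'a [u8]>> {
    let mut groups: Vec<Vec<&'a [u8]>> = vec![files];
    let mut offset = 0;
    loop {
        let mut next: Vec<Vec<&'a [u8]>> = Vec::new();
        let mut progressed = false;
        for group in groups {
            // A group of one cannot contain duplicates; discard it early.
            if group.len() < 2 {
                continue;
            }
            let mut buckets: HashMap<u64, Vec<&'a [u8]>> = HashMap::new();
            for f in group {
                if offset >= f.len() {
                    // Fully read (all files in a group share one size, so they
                    // finish together); keep them grouped under a sentinel key.
                    buckets.entry(u64::MAX).or_default().push(f);
                    continue;
                }
                progressed = true;
                let end = (offset + chunk).min(f.len());
                let mut h = DefaultHasher::new();
                h.write(&f[offset..end]);
                buckets.entry(h.finish()).or_default().push(f);
            }
            // Each distinct partial hash becomes its own narrower group.
            next.extend(buckets.into_values());
        }
        groups = next;
        if !progressed {
            break; // no file had data left to read
        }
        offset += chunk;
    }
    // Surviving groups of 2+ files are duplicates (up to hash collisions).
    groups.into_iter().filter(|g| g.len() > 1).collect()
}
```

Because unique files are discarded as soon as their partial hash differs, most non-duplicates are eliminated after reading only their first chunk.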

This looks like it should be a very fast solution, but it isn't suitable for the current Czkawka version:

kornelski commented 3 years ago

It doesn't have to keep the files open. It's fine to close and reopen them when needed.

The key insight is that if you split a file into multiple hashes (an array of hashes) and put these multi-hashes in a tree (a B-tree or binary tree), then you don't need to know all of the hashes at once. You only need to compare them as bigger/smaller, which means you can stop reading a file as soon as you find a difference. And when all of the files are in the same tree, you only compare each file against a minimum number of other files as you go down the tree, and you only need to compare a minimum number of hashes.
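The bigger/smaller comparison above amounts to lexicographic ordering over per-chunk hashes, computed lazily. A minimal sketch (again over byte slices, with `DefaultHasher` as a stand-in hash; `cmp_by_chunk_hashes` is a hypothetical name) that also reports how many chunks had to be hashed before the comparison could stop:

```rust
use std::cmp::Ordering;
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Stand-in hash for one chunk of file content.
fn chunk_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(data);
    h.finish()
}

// Compare two same-size "files" by the lexicographic order of their per-chunk
// hashes. The comparison stops at the first differing chunk, so the tails of
// differing files are never read or hashed. Returns the ordering and the
// number of chunks that were actually hashed.
fn cmp_by_chunk_hashes(a: &[u8], b: &[u8], chunk: usize) -> (Ordering, usize) {
    let mut chunks_read = 0;
    for (ca, cb) in a.chunks(chunk).zip(b.chunks(chunk)) {
        chunks_read += 1;
        match chunk_hash(ca).cmp(&chunk_hash(cb)) {
            Ordering::Equal => continue,       // so far identical: keep reading
            other => return (other, chunks_read), // first difference: stop here
        }
    }
    (Ordering::Equal, chunks_read) // all chunks matched: files are equal
}
```

A tree keyed by this ordering then gives the second half of the insight: inserting a file into a B-tree or binary tree only compares it against the O(log n) files along its search path, not against every file in the size group, and each of those comparisons reads only up to the first differing chunk.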

x2es commented 2 years ago

I'd like to link this thread with this idea https://github.com/qarmin/czkawka/issues/640