ashvardanian opened 8 months ago
Reading the readme, I don't see any mention of processing compressed files. Of course, the file can be decompressed first in some other way, but it would be nice to have a way to process a compressed file without having to load it fully into memory first. Let me be more specific: here is how you can process a file line by line in Python:
```python
import gzip

with gzip.open('input.gz', 'rt') as f:
    for line in f:
        ...  # process each decompressed line as it streams in
```
but what if I'm going to ignore most of the lines anyway? Having some form of efficient search through compressed files would be nice. Thank you for making this project open source!
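For context, here is a minimal sketch of what streaming search over a gzip file could look like in plain Python, decompressing chunk by chunk instead of line by line; the function name and chunk size are illustrative, not part of any existing API:

```python
import gzip


def contains_gz(path: str, needle: bytes, chunk_size: int = 1 << 20) -> bool:
    """Stream-decompress `path` and search for `needle` chunk by chunk,
    keeping only len(needle) - 1 bytes of overlap between chunks so a
    match straddling a chunk boundary is not missed."""
    overlap = b""
    with gzip.open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            window = overlap + chunk
            if needle in window:
                return True
            overlap = window[-(len(needle) - 1):] if len(needle) > 1 else b""


# Usage: peak memory stays near one chunk, regardless of file size.
# found = contains_gz("input.gz", b"ERROR")
```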
Hi there, I would love to know which hashing algorithms are currently used. And for automata-based fuzzy searching, would it perform better than the current string search algorithm on paper and in design? Thanks!
@happysalada, search through compressed data is an attractive feature proposition. I've been thinking about it a lot over the years, but it's not trivial for most compression types. Will keep in mind.
@0xqd, we currently implement Rabin-style hashing and fingerprinting, documented here. The header file also provides some details.
I am looking into alternative algorithms as well, but I want the primary hash and the rolling hash to use the same scheme.
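To illustrate the general idea, here is a generic Rabin-Karp style rolling hash sketch, not StringZilla's actual implementation; the base and modulus are illustrative choices:

```python
def rolling_hashes(data: bytes, window: int,
                   base: int = 257, mod: int = (1 << 61) - 1):
    """Yield the hash of every `window`-byte substring of `data`,
    updating in O(1) per step instead of rehashing each window."""
    if len(data) < window:
        return
    h = 0
    for byte in data[:window]:
        h = (h * base + byte) % mod
    yield h
    top = pow(base, window - 1, mod)  # weight of the outgoing byte
    for i in range(window, len(data)):
        # Drop the oldest byte, shift, and append the new byte.
        h = ((h - data[i - window] * top) * base + data[i]) % mod
        yield h


# Usage: equal fingerprints flag candidate matches; only those
# candidates need a full byte-wise comparison.
# hashes = list(rolling_hashes(b"abracadabra", 3))
```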
Features

Breaking naming and organizational changes:

- `edit_distance` to `levenshtein_distance`, to match Hamming

Any other requests?