ekzhu / datasketch

MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++, LSH Ensemble and HNSW
https://ekzhu.github.io/datasketch
MIT License

Forever growing index #219

Open surkova opened 1 year ago

surkova commented 1 year ago

We have a use case where we have an endless stream of MinHashes that we continuously compare against the MinHashes we have seen before. If we haven't seen one, we add it to the index. We are using Redis as our backend, and from time to time we need to switch instances because they reach 32 GB in size and cannot grow any further (today this takes about 4 months for us, but we get more data every day). For our use case it would be ideal if we could specify a last_seen key for a MinHash to implement an eviction policy, but as far as I understand this is not possible? Or is it?
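For reference, the loop described above looks roughly like this with datasketch's Redis storage backend (a minimal sketch; the threshold, num_perm, and connection details are illustrative placeholders):

```python
from datasketch import MinHashLSH

# Redis-backed LSH index; parameters and connection details are illustrative.
lsh = MinHashLSH(
    threshold=0.9,
    num_perm=128,
    storage_config={
        "type": "redis",
        "redis": {"host": "localhost", "port": 6379},
    },
)

def handle(key, minhash):
    """Process one MinHash from the stream: skip it if a near-duplicate is
    already indexed, otherwise insert it under `key`."""
    if lsh.query(minhash):       # any previously indexed key above the threshold?
        return False             # seen before
    lsh.insert(key, minhash)     # new item: grow the index
    return True
```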

ekzhu commented 1 year ago

That's a great scenario. The library currently doesn't handle eviction of keys. Is it possible for you to implement it around the library using the index's delete function? It would be great to utilize Redis' EXPIRE on keys, but I'm not sure how to implement that so it plays well with the LSH index itself.
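As a sketch of that "around the library" approach (not a datasketch feature): keep a last-seen timestamp per key outside the index, and periodically remove stale keys from the MinHashLSH:

```python
import time

last_seen = {}  # key -> unix timestamp of the last time the key was inserted or matched

def touch(key):
    last_seen[key] = time.time()

def evict_older_than(lsh, max_age_seconds):
    """Remove keys that haven't been seen recently from a datasketch MinHashLSH.
    All eviction bookkeeping lives outside the index itself."""
    cutoff = time.time() - max_age_seconds
    for key in [k for k, ts in last_seen.items() if ts < cutoff]:
        lsh.remove(key)   # drop the key from the LSH index
        del last_seen[key]
```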

surkova commented 1 year ago

Thanks for the swift reply. In order to delete something, we need to know when it was added to the index, so the only way I see is to alter the key used to add a MinHash to the LSH so that it contains a timestamp; as we work with the data we would be constantly deleting entries and adding them back to the index with an updated key. Not really an optimal or easy-to-work-with solution.
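Roughly, that workaround would look like the following sketch (hypothetical key scheme; note it also requires keeping the original MinHash around so it can be re-inserted, which adds to the awkwardness):

```python
import time

def make_key(base_key):
    # Hypothetical scheme: encode the insertion time into the key itself.
    return f"{base_key}|{int(time.time())}"

def refresh(lsh, old_key, minhash):
    """'Touch' an entry by deleting it and re-inserting it under a key
    carrying a fresh timestamp. Returns the new key."""
    base = old_key.split("|", 1)[0]
    lsh.remove(old_key)
    new_key = make_key(base)
    lsh.insert(new_key, minhash)
    return new_key
```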

ekzhu commented 1 year ago

What is the typical window, in terms of number of MinHashes? Is there a way to time-partition the data stream so you can expire partitions as they age over time?
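To illustrate the time-partitioning idea (a sketch with hypothetical window handling, not an existing datasketch feature): keep one MinHashLSH per time window, query across all live windows, and drop whole partitions as they age out:

```python
import time
from collections import OrderedDict
from datasketch import MinHashLSH

WINDOW_SECONDS = 7 * 24 * 3600   # illustrative: one partition per week
MAX_WINDOWS = 16                 # keep roughly four months of partitions

partitions = OrderedDict()       # window start time -> MinHashLSH for that window

def current_partition():
    now = int(time.time())
    start = now - (now % WINDOW_SECONDS)
    if start not in partitions:
        partitions[start] = MinHashLSH(threshold=0.9, num_perm=128)
        while len(partitions) > MAX_WINDOWS:
            partitions.popitem(last=False)   # expire the oldest window wholesale
    return partitions[start]

def seen_before(minhash):
    # A hit in any live partition counts as "seen".
    return any(lsh.query(minhash) for lsh in partitions.values())

def handle(key, minhash):
    if not seen_before(minhash):
        current_partition().insert(key, minhash)
```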

dexterfichuk commented 1 year ago

It's always increasing, but we would expire based on when we last see a value. We're using the LSH as a clustering mechanism right now: we do a search, and if we get anything above the similarity threshold, then the MinHash belongs to that cluster. If no values map to a cluster for x days, we would like to purge that cluster.

We're looking at building our own datastore on Redis sorted sets to keep tabs on when we last see each cluster. It would be great if this could be built into the existing structure, but looking at the code I can see the difficulties with it.
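For illustration, that sorted-set bookkeeping could look like the sketch below (using redis-py with a hypothetical cluster:last_seen key; the actual index removal still goes through the MinHashLSH delete function):

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)
LAST_SEEN = "cluster:last_seen"   # hypothetical sorted set: member = cluster key, score = timestamp

def touch_cluster(cluster_key):
    # Record that this cluster just matched an incoming MinHash.
    r.zadd(LAST_SEEN, {cluster_key: time.time()})

def purge_stale_clusters(lsh, max_age_days):
    """Drop clusters that haven't matched anything in `max_age_days`
    from both the bookkeeping set and the datasketch MinHashLSH index."""
    cutoff = time.time() - max_age_days * 86400
    for key in r.zrangebyscore(LAST_SEEN, "-inf", cutoff):
        lsh.remove(key.decode())  # remove the cluster's key from the LSH index
        r.zrem(LAST_SEEN, key)
```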