Hillier98 opened 1 week ago
Hello, I also have this question. And when I look at the source code of IndexLSH, it seems that it doesn't use a random hyperplane?
Hi @Phoenix0726! About the random hyperplane: they seem to use a (n_hash_tables x hash_code_size) random matrix that they call a "rotation matrix". They fix the seed here. The training still puzzles me -- it would be great to know more about it and about the pre-processing it calls.
Btw, given that we seem to be looking at the same implementation, have you found the point in IndexLSH where the code size is set to "ceil(d / 8)", as suggested here?
The statement in the demo isn't wrong: LSH does not require training. It works without any training, unlike indexes such as PQ. However, training does tune the thresholds, so it is expected to improve accuracy.
It depends on the constructor parameters:
https://github.com/facebookresearch/faiss/blob/main/faiss/IndexLSH.h#L33
If train_thresholds is set, then the thresholds used to decide between 0 and 1 are set to the per-component medians of the training set. Otherwise they are set to 0.
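To make the two behaviours concrete, here is a minimal pure-Python sketch (an illustration of the idea, not the faiss code): without training, each projected component is binarized against 0; with training, it is binarized against the median of that component over the training set, which balances the bits.

```python
import statistics

def binarize(projections, thresholds):
    """Turn projected vectors into bit codes: bit i is 1 when
    component i exceeds its threshold (0 by default, or a learned median)."""
    return [
        [1 if v > t else 0 for v, t in zip(proj, thresholds)]
        for proj in projections
    ]

def train_medians(train_projections):
    """Mimic train_thresholds: pick the per-component median so that
    each bit splits the training data roughly in half."""
    nbits = len(train_projections[0])
    return [
        statistics.median(p[i] for p in train_projections)
        for i in range(nbits)
    ]

# Projected training data with all-positive values: against the default
# threshold of 0, every bit comes out 1 and carries no information.
train = [[0.5, 2.0], [1.5, 3.0], [2.5, 4.0], [3.5, 5.0]]

untrained = binarize(train, [0.0, 0.0])        # every code is [1, 1]
trained = binarize(train, train_medians(train))  # codes split 50/50 per bit
```

With trained thresholds, half of the vectors fall on each side of every hyperplane, so the codes discriminate between them; that is the accuracy gain the answer above refers to.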
@mdouze @gtwang01 Thanks for your reply!
I've read that IndexLSH is based on random-projection LSH. I may have missed it, but I couldn't find where in the code the dot product between the dataset and the hyperplanes is computed. Could you please point me to it? At first I thought it was this, but I'm not sure, also because this seems to be called only if rotate_data is set, while hyperplane projection should happen by default in random-projection LSH. Or is the implementation not random-projection LSH after all? Thanks in advance for the clarification.
(A minor question: is nbits the number of hash tables, or the number of hyperplanes per hash table, i.e., the number of bits of the hash code? I assume the latter. If so, must the idea of using multiple hash tables be implemented at a higher level, by constructing multiple indexes?)
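For other readers, the textbook random-projection scheme being asked about can be sketched in pure Python (this is an illustration of the classic scheme, not the faiss implementation): nbits is the number of hyperplanes per table, each bit is the sign of a dot product, and extra hash tables are simply independent sets of hyperplanes built at a higher level.

```python
import random

def random_hyperplanes(nbits, d, seed=0):
    """One hash table: nbits random hyperplanes in dimension d,
    drawn from a Gaussian with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(nbits)]

def hash_code(x, hyperplanes):
    """Classic random-projection LSH: bit i is the sign of the dot
    product of x with hyperplane i."""
    return tuple(
        1 if sum(xi * hi for xi, hi in zip(x, h)) > 0 else 0
        for h in hyperplanes
    )

d, nbits = 4, 8
planes = random_hyperplanes(nbits, d)

x = [1.0, 0.5, -0.2, 0.3]
close = [1.1, 0.4, -0.1, 0.35]   # a small perturbation of x

code_x = hash_code(x, planes)
code_close = hash_code(close, planes)
# Nearby points should agree on most bits (small Hamming distance).
hamming = sum(a != b for a, b in zip(code_x, code_close))

# Multiple hash tables = independent hyperplane sets, built at a
# higher level by the caller, exactly as the question suggests.
tables = [random_hyperplanes(nbits, d, seed=s) for s in range(3)]
codes = [hash_code(x, t) for t in tables]
```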
Thank you!
Hi, could you please help me better understand your work? I read in one of your demos that "Unlike PQ, LSH does not require training". However, I see a train function in the code of IndexLSH. My assumption was that this function adjusts the thresholds (i.e., the hyperplanes) to the dataset in order to produce better hash buckets, and that it is called after constructing IndexLSH and before searching for a query point. Is this not the case? Could you please explain the purpose of the function and when it should be called when using IndexLSH, if at all? Thanks!