Summary
The benchmark results do not seem easy to reproduce. Take the ScaNN comparison (https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors#4-bit-pq-comparison-with-scann) as an example: the benchmark code appears to live in `bench_all_ivf/cmp_with_scann.py`, but running that script does not directly produce the figures, since it contains no plotting code. In addition, there are many hyperparameter combinations, so it is not obvious how the 'dots' in the benchmark figure were selected. Am I missing some code, or is the plotting code simply not included in the repository? Could anyone elaborate on how the figure was produced?
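To make the question concrete, here is a rough sketch of what I imagined the missing step to be: collect one (recall, QPS) point per hyperparameter combination, keep only the Pareto-optimal ones, and plot them. The result format, the parameter labels, and the Pareto-frontier selection are my own assumptions, not something I found in `cmp_with_scann.py`, so please correct me if the actual figures were produced differently.

```python
import matplotlib.pyplot as plt

def pareto_optimal(points):
    """Keep only non-dominated operating points: a point survives if no
    other point has both higher recall and higher QPS."""
    points = sorted(points, key=lambda p: (-p[0], -p[1]))  # recall descending
    frontier, best_qps = [], float("-inf")
    for recall, qps, label in points:
        if qps > best_qps:
            frontier.append((recall, qps, label))
            best_qps = qps
    return sorted(frontier)  # increasing recall order

# Hypothetical results, one tuple per hyperparameter combination:
# (recall@1, queries per second, parameter description)
results = [
    (0.62, 42000, "nprobe=1"),
    (0.75, 30000, "nprobe=2"),
    (0.78, 21000, "nprobe=4"),
    (0.84, 15000, "nprobe=8"),
    (0.88, 9000,  "nprobe=16"),
    (0.80, 8000,  "nprobe=16, other setting"),  # dominated, dropped below
]

frontier = pareto_optimal(results)
recalls = [r for r, _, _ in frontier]
qps = [q for _, q, _ in frontier]

plt.plot(recalls, qps, "o-", label="faiss (sketch)")
plt.xlabel("recall@1")
plt.ylabel("queries per second")
plt.yscale("log")
plt.legend()
plt.savefig("cmp_with_scann_sketch.png")
```

Is this roughly what was done to pick the dots, or is there a separate plotting script that just isn't checked in?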