Open harishankar-gopalan opened 2 years ago
This is a good starting point for answering your questions: https://github.com/microsoft/DiskANN/tree/main/workflows. The algorithms are described in the papers linked in the README.
Hi Team,
I am interested in DiskANN as a concept. However, I am not able to find any design-decision discussions for the major parts of DiskANN. Can anyone point me to them?
I am mostly interested in answers to questions like the following:
- Does the whole vector index need to be in memory to query? The "Index" class needs to be in memory: https://github.com/microsoft/DiskANN/blob/main/workflows/in_memory_index.md. The pq_flash_index does not need to be in memory: https://github.com/microsoft/DiskANN/blob/main/workflows/SSD_index.md
- How are updates handled? In the sense that, if I have a scenario where updates are also frequent, will DiskANN be a good fit? You can do thousands of updates per second: https://github.com/microsoft/DiskANN/blob/main/workflows/dynamic_index.md
- How are deletions handled? Are they tombstoned with a flag, or is the graph reorganized (if the index is graph based)? They are tombstoned, and then when the consolidate_delete function is called, the graph is edited in such a way that recall does not drop. See https://arxiv.org/abs/2105.09613 for the logic and measurements.
- How do the performance benchmarks compare with other popular ANN libraries in terms of memory footprint, time to build the graph, and time to query? In terms of memory footprint, the SSD-based index is smaller AFAIK. Time to build is higher than FAISS IVF and on par with HNSW. Query speeds at high recall targets are very good. The in-memory index is at least as accurate and fast as HNSW on the datasets we have tried, and in many cases better.
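The tombstone-then-consolidate pattern described for deletions can be sketched in miniature. The class and method names below are illustrative, not the DiskANN API, and the real consolidation additionally prunes the patched neighbor lists to keep node degrees bounded (see the FreshDiskANN paper):

```python
# Toy tombstone-based deletion: deletes are O(1) marks, and a later
# consolidation pass splices each tombstone's live neighbors into its
# in-neighbors so graph paths (and recall) survive. Illustrative only.

class ToyGraphIndex:
    def __init__(self):
        self.neighbors = {}   # node id -> ordered list of neighbor ids
        self.deleted = set()  # tombstoned ids, still physically in the graph

    def delete(self, node_id):
        self.deleted.add(node_id)  # O(1); graph untouched until consolidation

    def consolidate_deletes(self):
        for node, nbrs in self.neighbors.items():
            if node in self.deleted:
                continue
            patched = []
            for n in nbrs:
                if n in self.deleted:
                    # Splice in the tombstone's own live neighbors
                    patched.extend(x for x in self.neighbors.get(n, [])
                                   if x not in self.deleted and x != node)
                else:
                    patched.append(n)
            self.neighbors[node] = list(dict.fromkeys(patched))  # dedupe, keep order
        for d in self.deleted:  # finally drop the tombstoned nodes
            self.neighbors.pop(d, None)
        self.deleted.clear()

idx = ToyGraphIndex()
idx.neighbors = {0: [1], 1: [2], 2: [0]}
idx.delete(1)
idx.consolidate_deletes()
print(idx.neighbors)  # {0: [2], 2: [0]}
```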
Hi @harsha-simhadri, thanks for the detailed response. I did go through the DiskANN and FreshDiskANN papers. They are really thorough and detailed. I have a few questions that I am still consolidating. I hope I can put them across in this thread itself.
Couple of questions I have after reading the papers are as below:
The Vamana algorithm uses PQ-based vectors and also the full-precision vectors that are stored on the SSD. My understanding is that the PQ vectors are used first to retrieve the candidate nodes during inserts and queries, and then the full-precision vectors are used to re-rank the results. Is that correct?
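That two-stage pattern (approximate distances on compressed vectors to gather candidates, then exact distances on full-precision vectors to re-rank) can be sketched with a toy quantizer. Real PQ learns per-subspace codebooks; the crude 8-bit scalar quantizer here is only a stand-in to keep the sketch short:

```python
import numpy as np

# Stage 1 uses only the compressed codes (what DiskANN keeps in RAM);
# stage 2 touches the full-precision vectors (what it keeps on SSD).

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 16)).astype(np.float32)  # full precision ("on SSD")
lo, hi = float(data.min()), float(data.max())
codes = np.round((data - lo) / (hi - lo) * 255).astype(np.uint8)  # compressed ("in RAM")

def search(query, k=5, n_candidates=50):
    # Stage 1: approximate distances, computed only from the compressed codes
    approx = codes.astype(np.float32) / 255.0 * (hi - lo) + lo
    cand = np.argsort(((approx - query) ** 2).sum(axis=1))[:n_candidates]
    # Stage 2: exact distances on the full-precision candidates, then re-rank
    exact = ((data[cand] - query) ** 2).sum(axis=1)
    return cand[np.argsort(exact)][:k]

query = rng.normal(size=16).astype(np.float32)
top = search(query)
```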
In the FreshDiskANN paper, under Delete Consolidation in Section 4.3, I quote the below:
This operation is trivially parallelized using prefix sums to consolidate the vertex list, and a parallel map operation to locally update the graph around the deleted nodes.
Can some more light be shed on what is meant by "prefix sums to consolidate the vertex list"?
Hi @harsha-simhadri thanks for your quick responses.
- Which workflow are you referring to?
The following part (last paragraph) from Section 3.1 of the initial DiskANN paper:
Our next and natural idea to address the second question is to store compressed vectors x̃_p for every database point p ∈ P in main memory, along with storing the graph on the SSD. We use a popular compression scheme known as Product Quantization [17], which encodes the data and query points into short codes (e.g., 32 bytes per data point) that can be used to efficiently obtain approximate distances d(x̃_p, x_q) at query time in Algorithm 1. We would like to remark that Vamana uses full-precision coordinates when building the graph index, and hence is able to efficiently guide the search towards the right region of the graph, although we use only the compressed data at search time.
Section 3.2:
We store the compressed vectors of all the data points in memory, and store the graph along with the full-precision vectors on the SSD.
Section 3.3:
Distance calculations to guide the best vertices (and neighborhoods) to read from disk can be done using the compressed vectors
2. Prefix sums to renumber vertices minus deletions.
Ok, I understand that the deleted ids need to be removed. But I don't understand what exactly is meant by "prefix sums" in this context. Apologies if it is a very naive question.
3. If merge fails, just redo?
In this case, won't there be an inconsistency? Or maybe not, because we are doing the merge onto an intermediate LTI rather than updating the existing LTI in place? Is that the rationale? From your response, my understanding is that the intermediate LTI will be squelched and the StreamingMerge restarted from the beginning if there is any failure during the merge operation.
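On the prefix-sums point (item 2 above), the idiom being referenced is the standard parallel-compaction pattern: an exclusive prefix sum over a 0/1 keep-mask directly gives every surviving vertex its new, gap-free index. A serial sketch (illustrative, not DiskANN code):

```python
# Each step below (map, scan, scatter) is data-parallel in the real
# implementation, which is why the operation "trivially parallelizes".

vertices = ["a", "b", "c", "d", "e"]
deleted = {1, 3}  # tombstoned positions ("b" and "d")

keep = [0 if i in deleted else 1 for i in range(len(vertices))]  # map

# Exclusive prefix sum: new_index[i] = number of kept vertices before i
new_index, running = [], 0
for k in keep:  # a parallel scan in the real implementation
    new_index.append(running)
    running += k

# Scatter: each kept vertex writes itself into its new slot
compacted = [None] * running
for i, k in enumerate(keep):
    if k:
        compacted[new_index[i]] = vertices[i]

print(compacted)  # ['a', 'c', 'e']
print(new_index)  # [0, 1, 1, 2, 2]
```

The same `new_index` array also serves to rewrite every surviving neighbor list, since it maps old vertex ids to their renumbered positions.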