opensearch-project / k-NN

🆕 Find the k-nearest neighbors (k-NN) for your vector data
https://opensearch.org/docs/latest/search-plugins/knn/index/
Apache License 2.0
156 stars 115 forks

[FEATURE] Improving Lucene Engine Query Performance by reducing number of times a single Lucene k-NN query gets executed #2115

Open navneet1v opened 1 month ago

navneet1v commented 1 month ago

Description

Currently, Lucene engine k-NN queries are executed during the rewrite phase rather than in the Weight class. During a recent deep-dive we observed that the rewrite function of a query can be called multiple times in the overall search flow.

Please check this code trace showing rewrite running before the start of the fetch phase.

  1. The transport action registers the fetch phase: https://github.com/opensearch-project/OpenSearch/blob/f67ed1796749376401c5cc617eff[…]n/java/org/opensearch/action/search/SearchTransportService.java
  2. The search context is built: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java
  3. While building the search context we parse the source again: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java (this was already done in the query phase)
  4. During context creation we run preProcess: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java
  5. In preProcess we do the context pre-processing: https://github.com/opensearch-project/OpenSearch/blob/4035db48c6963e46909f28ba8552[…]erver/src/main/java/org/opensearch/search/query/QueryPhase.java
  6. And then we rewrite again: https://github.com/opensearch-project/OpenSearch/blob/4f97fd3d5588f9be52bee37d2c51[…]r/src/main/java/org/opensearch/search/DefaultSearchContext.java

The same was observed in the flame graphs: when we have more than 1 shard, the rewrite on the query is called again during the fetch phase. This leads to the Lucene engine k-NN query running more than once and adds latency.
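To make the cost concrete, here is a minimal toy model (not the real Lucene/OpenSearch classes; all names are hypothetical) showing that when the expensive search lives inside rewrite(), each extra rewrite call repeats it:

```java
// Toy model, NOT the real Lucene/OpenSearch classes: the expensive k-NN
// search is performed inside rewrite(), so every extra rewrite() call
// (query phase, then fetch phase when there is more than one shard)
// repeats the whole graph search.
class ToyKnnQuery {
    int searchExecutions = 0;

    ToyKnnQuery rewrite() {
        searchExecutions++; // stands in for the actual HNSW graph search
        return this;
    }
}

public class RewritePerPhaseDemo {
    public static void main(String[] args) {
        ToyKnnQuery q = new ToyKnnQuery();
        q.rewrite(); // query phase rewrites the query
        q.rewrite(); // fetch phase rewrites it again (>1 shard)
        System.out.println(q.searchExecutions); // prints 2: the search ran twice
    }
}
```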

Flame Graph Lucene Query_2shards_1M_128D

Number of shards: 2
KNN engine: Lucene
Dataset: 1M 128D sift
Tool used: OSB
Docker image: opensearchstaging/opensearch:2.17.0.10284
Heap: 16GB
RAM: 64GB
Cores: 16
Search JFR: search.jfr.zip

Why we need Query and its rewrite in fetch phase

From a quick scan of the OpenSearch core code in the fetch phase, I found use cases that might require running the query rewrite again and then using the result during the fetch phase. Below are the places where the query that was rewritten in the DefaultSearchContext is added to the FetchPhaseSearchContext and used during the FetchPhase:

  1. Explain sub-phase: this is used to provide the explanation of why a particular result is in the query response.
  2. PercolateQuery highlighting: it is unclear what this query type does, but it uses the visitor pattern of the Query (Lucene interface).
  3. Inner hits: this is used to get the child hits when there is a parent-child relationship between fields. This use case is less clear, as it performs some additional logic on the query; it needs a deeper dive.

Possible Solution

Solution 1

One solution we can explore is wrapping all the Lucene queries in another query clause, say LuceneEngineKNNQuery, with a class member that holds the actual Lucene query. When createWeight is called we first rewrite the wrapped query and then create the weight (and scorer) on top of the result. This ensures that the Lucene k-NN query is executed only once.

Sample Code:

public class LuceneEngineKNNQuery extends Query {

    Query luceneQuery; // this can be KnnFloatVectorQuery, KnnByteVectorQuery, DiversifyingChildrenByteKnnVectorQuery etc.

    @Override
    public Weight createWeight(IndexSearcher searcher, ScoreMode scoreMode, float boost) throws IOException {
        // rewrite the lucene query
        Query docAndScoreQuery = searcher.rewrite(luceneQuery);
        // now return the weight class
        return docAndScoreQuery.createWeight(searcher, scoreMode, boost);
    }

    @Override
    public String toString(String field) {
        return "";
    }

    @Override
    public void visit(QueryVisitor visitor) {

    }

    @Override
    public boolean equals(Object obj) {
        return false;
    }

    @Override
    public int hashCode() {
        return 0;
    }
}
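The effect of the wrapper can be sketched with a small toy model (hypothetical names, no real Lucene dependency): the wrapper survives repeated phase rewrites untouched, and the expensive inner rewrite happens once, at weight-creation time:

```java
// Toy sketch of Solution 1's delegation idea (hypothetical names, no real
// Lucene dependency): the wrapper's rewrite() is a no-op, so repeated phase
// rewrites cost nothing, and the expensive inner rewrite runs once, inside
// createWeight().
class ToyInnerKnnQuery {
    int searchExecutions = 0;
    void expensiveRewrite() { searchExecutions++; } // stands in for the k-NN search
}

class ToyWrapperQuery {
    final ToyInnerKnnQuery inner;
    ToyWrapperQuery(ToyInnerKnnQuery inner) { this.inner = inner; }

    ToyWrapperQuery rewrite() { return this; }        // no-op: survives both phases

    void createWeight() { inner.expensiveRewrite(); } // search deferred to weight creation
}

public class WrapperDemo {
    public static void main(String[] args) {
        ToyWrapperQuery q = new ToyWrapperQuery(new ToyInnerKnnQuery());
        q.rewrite();       // query phase rewrite: cheap no-op
        q.rewrite();       // fetch phase rewrite: cheap no-op
        q.createWeight();  // the k-NN search runs here, exactly once
        System.out.println(q.inner.searchExecutions); // prints 1
    }
}
```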

Solution 2

Another solution is to cache the SearchContext at the shard level, so that when the fetch phase executes we reuse the same SearchContext and do not need to rewrite the queries. Alternatively, we can defer the rewrite and make it lazy, so that only the fetch pre-processors that need the rewrite perform it; once one fetch processor has done so, none of the others needs to run the rewrite again, because the query has already been replaced.
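The lazy-rewrite variant can be sketched as a minimal toy (hypothetical names; a String stands in for the rewritten Lucene Query): the rewrite result is memoized, so only the first fetch pre-processor that asks for it pays the cost:

```java
// Toy sketch of the lazy-rewrite idea (hypothetical names; a String stands
// in for the rewritten Lucene Query): memoize the rewrite result so only
// the first caller triggers the expensive work.
class LazyRewriteContext {
    int rewriteCount = 0;
    private String rewritten;

    String rewrittenQuery() {
        if (rewritten == null) {
            rewriteCount++;       // expensive rewrite happens here, once
            rewritten = "doc-and-score-query";
        }
        return rewritten;         // later callers reuse the cached result
    }
}

public class LazyRewriteDemo {
    public static void main(String[] args) {
        LazyRewriteContext ctx = new LazyRewriteContext();
        ctx.rewrittenQuery(); // e.g. the explain sub-phase asks first
        ctx.rewrittenQuery(); // e.g. highlighting reuses the cached rewrite
        System.out.println(ctx.rewriteCount); // prints 1
    }
}
```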

Pros and Cons

Both solutions have their own pros and cons:

  1. Solution 1 solves the problem without making drastic changes in core, but may create problems if Lucene's query execution moves away from rewrite.
  2. Solution 2: implementing a shard-level SearchContext cache may be tricky, as it can increase heap usage and the blast radius is unknown. But this solution would ensure the rewrite happens only once per query for the whole search request at the OpenSearch level.
jmazanec15 commented 1 month ago

I'm in favor of solution (1) over solution (2). I cannot think of a major advantage of doing rewrite vs createWeight, unless there is some kind of benefit around caching - but I cannot think of any.

navneet1v commented 1 month ago

I am also in favor of 1, but I was wondering whether fixing this on the core side would help us; hence I added solution 2. Since the change in core is not simple and might impact latency if not implemented correctly, I think we should go with solution 1. But I would like to hear more from other maintainers.

@luyuncheng , @vamshin , @heemin32

heemin32 commented 1 month ago

How about caching the faiss search result for a short time? Do we know whether the query is from the same request or not, using something like a query UUID? The blast radius could be smaller than caching the SearchContext.

navneet1v commented 1 month ago

@heemin32 this is not for the faiss engine; this is for the Lucene engine. Also, could you elaborate on how caching would work in the case of Lucene?

heemin32 commented 1 month ago

> @heemin32 this is not for the faiss engine; this is for the Lucene engine. Also, could you elaborate on how caching would work in the case of Lucene?

You are saying that for faiss, the query will happen only one time? I thought it could be the same for faiss. Maybe my understanding is limited to innerHit. For innerHit, the query gets executed 1 + n (number of returned items) times, which is also true for faiss.

Hmm. But for those n queries, because each is filtered to its single parent, I guess exact search will be hit and caching might not help here.

navneet1v commented 1 month ago

@heemin32 I didn't mention faiss anywhere. This double query execution happens for Lucene because the Lucene query is executed during rewrite, and if you see the links added in the description, the rewrite happens in both the query and fetch phases (for more than 1 shard). Hence the extra latency.

This issue doesn't talk about extra latency during inner hits.

luyuncheng commented 1 month ago

I am also in favor of 1! It looks good to me. @navneet1v

> I thought it could be the same for faiss.

@heemin32 @navneet1v I think it would not happen with the native k-NN query. The rewrite comes from AbstractKnnVectorQuery in the Lucene engine, and it does the rewrite in pre-process, which is for reducing per-shard query work. But we skip this in KNNQuery.

luyuncheng commented 1 month ago

> I'm in favor of solution (1) over solution (2). I cannot think of a major advantage of doing rewrite vs createWeight, unless there is some kind of benefit around caching - but I cannot think of any.

@jmazanec15 I think the major advantage is that when there are no hits in the knnQuery (e.g. with a specific min_score), it can skip the shard. But I prefer to skip the rewrite for the majority of scenarios,

so solution 1 is better for me. @navneet1v

navneet1v commented 1 month ago

> @heemin32 @navneet1v I think it would not happen with the native k-NN query. The rewrite comes from AbstractKnnVectorQuery in the Lucene engine, and it does the rewrite in pre-process, which is for reducing per-shard query work. But we skip this in KNNQuery.

Yes, that's correct. The issue we are discussing in this GH issue doesn't happen for native engines.

navneet1v commented 1 month ago

@luyuncheng thanks for putting up the thoughts. @junqiu-lei will be working on the fix. I think we should be able to fix this before the 2.18 release.

heemin32 commented 17 hours ago

With option 1, we could use inheritance instead of delegation, allowing us to inherit all the other methods unchanged.

public class LuceneEngineKNNFloatQuery extends KnnFloatVectorQuery {

    public LuceneEngineKNNFloatQuery(String field, float[] target, int k) {
        super(field, target, k);
    }

    @Override
    public Weight createWeight(IndexSearcher searcher, ScoreMode scoreMode, float boost) throws IOException {
        // rewrite this query (the k-NN search itself happens during the rewrite)
        Query docAndScoreQuery = searcher.rewrite(this);
        // now delegate weight creation to the rewritten query
        return docAndScoreQuery.createWeight(searcher, scoreMode, boost);
    }
}

// and one subclass per Lucene query type, e.g.:
public class LuceneEngineKNNByteQuery extends KnnByteVectorQuery {...}
public class LuceneEngineKNNFloatChildrenQuery extends DiversifyingChildrenFloatKnnVectorQuery {...}
public class LuceneEngineKNNByteChildrenQuery extends DiversifyingChildrenByteKnnVectorQuery {...}
navneet1v commented 16 hours ago

@heemin32 I would prefer delegation/composition here over inheritance, so that we can avoid creating a new query class in OpenSearch whenever Lucene adds a new query type.
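The maintainability argument can be sketched with a toy contrast (hypothetical names): a single delegating wrapper covers every query type, whereas the inheritance approach needs one subclass per Lucene query class:

```java
// Toy contrast (hypothetical names): with delegation, one wrapper class
// handles any query type, so no new OpenSearch class is needed when Lucene
// adds another k-NN query variant.
class ToyFloatQuery {}
class ToyByteQuery {}

class DelegatingWrapper {
    final Object anyQuery; // with inheritance, each type would need its own subclass
    DelegatingWrapper(Object anyQuery) { this.anyQuery = anyQuery; }
}

public class DelegationDemo {
    public static void main(String[] args) {
        DelegatingWrapper a = new DelegatingWrapper(new ToyFloatQuery());
        DelegatingWrapper b = new DelegatingWrapper(new ToyByteQuery());
        System.out.println(a.getClass() == b.getClass()); // prints true: same wrapper for both
    }
}
```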