Open lvxinliang opened 3 days ago
I found a similar open issue and discussion that might be relevant to your problem:
To resolve the issue of different pages returning the same result when querying multiple documents in the knowledge base with Dify version 0.6.16, review your retrieval configuration and make sure the correct recall strategy is selected.
Dify offers two recall strategies: N-to-1 Retrieval and Multi-path Retrieval. Setting up a knowledge base correctly involves the following steps:
1. Create Knowledge Base & Upload Documents
2. Text Preprocessing and Cleaning
3. Indexing Mode
4. Retrieval Settings
In applications that utilize multiple knowledge bases, it is essential to configure the retrieval mode to enhance the precision of retrieved content. To set the retrieval mode for the knowledge bases, navigate to Context -- Retrieval Settings -- Rerank Setting.
In Multi-path Retrieval mode, the retriever scans all knowledge bases linked to the application for text content relevant to the user's question, and the results are then consolidated.
This method simultaneously queries all knowledge bases connected in "Context", seeking relevant text chunks across multiple knowledge bases, collecting all content that aligns with the user's question, and ultimately applying the Rerank strategy to identify the most appropriate content to respond to the user. This retrieval approach offers more comprehensive and accurate results by leveraging multiple knowledge bases simultaneously.
For instance, suppose application A is connected to three knowledge bases: K1, K2, and K3. When a user sends a question, relevant pieces of content are retrieved from all three knowledge bases and combined. The Rerank strategy is then applied to this combined pool to find the content that best relates to the user's query, enhancing the precision and reliability of the results.
In practical Q&A scenarios, the sources of content and retrieval methods for each knowledge base may differ. To manage the mixed content returned from retrieval, the Rerank strategy acts as a refined sorting mechanism. It ensures that the candidate content aligns well with the user's question, optimizing the ranking of results across multiple knowledge bases to identify the most suitable content, thereby improving answer quality and overall user experience.
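The flow described above can be sketched in a few lines of Python. This is a minimal illustration, not Dify's implementation: `toy_score` is a hypothetical stand-in for a real embedding or rerank model, and the knowledge bases are plain lists of text chunks.

```python
from typing import Callable

def multi_path_retrieve(
    query: str,
    knowledge_bases: dict[str, list[str]],
    score: Callable[[str, str], float],
    top_k: int = 3,
) -> list[tuple[str, float]]:
    """Query every knowledge base, pool the chunks, then rerank globally."""
    candidates = []
    for kb_name, chunks in knowledge_bases.items():
        for chunk in chunks:
            candidates.append((chunk, score(query, chunk)))
    # Rerank: sort the pooled candidates by relevance, high to low.
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return candidates[:top_k]

# Toy scorer: fraction of query words that appear in the chunk.
def toy_score(query: str, chunk: str) -> float:
    words = query.lower().split()
    return sum(w in chunk.lower() for w in words) / len(words)

kbs = {
    "K1": ["Dify supports multi-path retrieval."],
    "K2": ["Rerank sorts candidates by relevance."],
    "K3": ["Unrelated text about billing."],
}
results = multi_path_retrieve("how does rerank sort retrieval results", kbs, toy_score, top_k=2)
```

The key point is that candidates from all knowledge bases compete in a single ranked pool, rather than each knowledge base returning its own isolated results.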
Considering the costs associated with using Rerank and the needs of the business, the multi-path retrieval mode provides two Rerank settings: Weighted Score and Rerank Model.
This setting uses internal scoring mechanisms and does not require an external Rerank model, thus avoiding any additional processing costs. You can select the most appropriate content matching strategy by adjusting the weight ratio sliders for semantics or keywords.
Semantic Value of 1
This mode activates semantic retrieval only. By utilizing the Embedding model, the search depth can be enhanced even if the exact words from the query do not appear in the knowledge base, as it calculates vector distances to return the relevant content. Furthermore, when dealing with multilingual content, semantic retrieval can capture meanings across different languages, yielding more accurate cross-language search results.
Keyword Value of 1
This mode activates keyword retrieval only. It matches the user's input text against the full text of the knowledge base, making it ideal for scenarios where the user knows the exact information or terminology. This method is resource-efficient, making it suitable for quickly retrieving information from large document repositories.
Custom Keyword and Semantic Weights
In addition to enabling only semantic or keyword retrieval, you can set a custom Weighted Score. By continuously adjusting the weights of both, you can determine the ratio that best fits your business scenario.
The Rerank model is an external scoring system that calculates the relevance score between the user's question and each candidate document provided, improving the results of semantic ranking and returning a list of documents sorted by relevance from high to low.
While this method incurs some additional costs, it is more adept at handling complex knowledge base content, such as content that combines semantic queries and keyword matches, or cases involving multilingual returned content.
Dify currently supports multiple Rerank models. To use external Rerank models, you'll need to provide an API Key. Enter the API Key for the Rerank model (such as Cohere, Jina, etc.) on the "Model Provider" page.
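The external reranking step can be sketched as: first-stage candidates are sent to a scoring model, and the returned scores determine the final order. Here `call_rerank_model` is a hypothetical placeholder for a provider API (Cohere, Jina, etc.); it is faked with word overlap so the sketch runs offline.

```python
def call_rerank_model(query: str, documents: list[str]) -> list[float]:
    """Hypothetical stand-in for an external Rerank API call.

    A real implementation would send (query, documents) to a provider
    and receive one relevance score per document in return.
    """
    q = set(query.lower().split())
    return [len(q & set(d.lower().split())) / len(q) for d in documents]

def rerank(query: str, documents: list[str], top_n: int = 3) -> list[str]:
    """Return the top_n documents sorted by model score, high to low."""
    scores = call_rerank_model(query, documents)
    order = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in order[:top_n]]

docs = [
    "Shipping usually takes five days.",
    "Refunds are issued within five business days.",
    "Our refunds team issues a refund after review.",
]
top = rerank("how long until I get a refund", docs, top_n=2)
```

In production the scoring call is the costly part, which is why this setting incurs the additional expense mentioned above.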
Top K
This parameter filters the text segments that are most similar to the user's question. The system dynamically adjusts the number of segments based on the context window size of the selected model. A higher value results in more text segments being recalled.
Score Threshold
This parameter establishes the similarity threshold for filtering text segments. Only those segments with a vector retrieval similarity score exceeding the set threshold will be recalled. A higher threshold value results in fewer texts being recalled, but those recalled are likely to be more relevant. Adjust this parameter based on your specific needs for precision versus recall.
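The interaction between these two parameters can be shown in a short sketch (an illustration of the general filtering logic, not Dify's code): the threshold discards weak matches first, then Top K caps how many of the survivors are recalled.

```python
def filter_results(
    scored_chunks: list[tuple[str, float]],
    top_k: int = 3,
    score_threshold: float = 0.5,
) -> list[tuple[str, float]]:
    """Keep chunks scoring above the threshold, then truncate to the top K."""
    kept = [(chunk, s) for chunk, s in scored_chunks if s > score_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_k]

candidates = [("A", 0.91), ("B", 0.40), ("C", 0.75), ("D", 0.62)]
# threshold 0.5 drops B; top_k=2 then keeps the two best survivors.
filtered = filter_results(candidates, top_k=2, score_threshold=0.5)
```

Raising the threshold trades recall for precision; raising Top K does the opposite, which is the tuning described above.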
Multi-path Retrieval can achieve higher-quality recall results when retrieving from multiple knowledge bases; therefore, it is recommended to set the retrieval mode to Multi-path Retrieval.
By ensuring the correct configuration of these settings, you should be able to resolve the issue of different pages having the same result when querying multiple documents [1][2][3][4].
This has been fixed in a newer version: https://github.com/langgenius/dify/commit/2fe2e350ce55c85cf63d94fd3f5596bd781f7f17
Please close the issue if this is resolved.
Self Checks
Dify version
0.6.16
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
When multiple documents are uploaded at the same time, queries against those documents return confused results.
✔️ Expected Behavior
Different pages return distinct results, with no duplicate records.
❌ Actual Behavior
Different pages have the same result