Closed mrcmoresi closed 4 months ago
🤖
Hello,
Based on your request, you want to dynamically change the prompt in a `ConversationalRetrievalChain` based on the context value, especially when the retriever returns zero documents, so the model doesn't fabricate an answer.
To achieve this, you can modify the `_get_docs` method in the `ConversationalRetrievalChain` class. In this method, after retrieving the documents, check whether the list of documents is empty. If it is, you can modify the prompt or question accordingly so the model doesn't fabricate an answer.
Here is a modified version of the `_get_docs` method:
```python
def _get_docs(
    self,
    question: str,
    inputs: Dict[str, Any],
    *,
    run_manager: CallbackManagerForChainRun,
) -> List[Document]:
    """Get docs."""
    docs = self.retriever.get_relevant_documents(
        question, callbacks=run_manager.get_child()
    )
    if not docs:  # If no documents are retrieved
        # Modify the question or prompt here
        question = "Modified question or prompt"
        # Retrieve documents again with the modified question
        docs = self.retriever.get_relevant_documents(
            question, callbacks=run_manager.get_child()
        )
    return self._reduce_tokens_below_limit(docs)
```
In this modification, if no documents are retrieved, the question is modified and retrieval is run again with the modified question. You can replace `"Modified question or prompt"` with the actual modified question or prompt you want to use when no documents are retrieved.
Please note that this modification requires you to modify the LangChain codebase, which might not be ideal if you want to keep your LangChain installation up-to-date with the official releases. However, this is a workaround that you can use to achieve your goal.
I hope this helps! If you have any other questions, feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot you didn't fully get what I'm trying to do; I don't want to change the question when no documents are retrieved. What I want is: when the retriever returns 0 documents, either send a phrase like "I don't know about that" directly as the output, or set the context of the QA_PROMPT to a fixed value, say "I don't have that information, could you please rephrase your question or ask about a different topic".
🤖
Based on the context provided, you can achieve your goal by modifying the `_get_docs` method in the `ConversationalRetrievalChain` class. This method is responsible for retrieving the documents for a given question.
Here's how you can modify the `_get_docs` method to set a fixed context when the retriever returns zero documents:
```python
def _get_docs(
    self,
    question: str,
    inputs: Dict[str, Any],
    *,
    run_manager: CallbackManagerForChainRun,
) -> List[Document]:
    """Get docs."""
    docs = self.retriever.get_relevant_documents(
        question, callbacks=run_manager.get_child()
    )
    # Check if no documents are returned
    if not docs:
        # Set a fixed context, e.g. a single placeholder document
        docs = [Document(page_content="No relevant documents found.")]
    return self._reduce_tokens_below_limit(docs)
```
In this modification, if the retriever returns zero documents, we create a new document with a fixed context ("No relevant documents found.") and return it. You can replace this fixed context with any context you want to use when no documents are returned by the retriever.
This modification should be made to the `ConversationalRetrievalChain` class in the LangChain codebase. You can find the original `_get_docs` method in the `ConversationalRetrievalChain` class in the libs/langchain/langchain/chains/conversational_retrieval/base.py file.
Please note that this modification will affect all instances of the `ConversationalRetrievalChain` class. If you want to apply it to a specific instance only, you may need to create a subclass of `ConversationalRetrievalChain` and override the `_get_docs` method in the subclass.
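To make the subclassing route concrete, here is a minimal, hypothetical sketch of the override pattern. It uses small stand-in classes for `ConversationalRetrievalChain`, the retriever, and `Document` so it runs without LangChain installed; with LangChain, you would subclass the real class and keep the full `_get_docs` signature shown above.

```python
from dataclasses import dataclass
from typing import List

# Stand-ins for the real LangChain classes, so this sketch is self-contained.
@dataclass
class Document:
    page_content: str

class BaseRetrievalChain:
    """Minimal stand-in for ConversationalRetrievalChain."""
    def __init__(self, retriever):
        self.retriever = retriever

    def _get_docs(self, question: str) -> List[Document]:
        return self.retriever.get_relevant_documents(question)

class FixedFallbackChain(BaseRetrievalChain):
    """Overrides _get_docs to return a fixed document when retrieval is empty."""
    FALLBACK = "No relevant documents found."

    def _get_docs(self, question: str) -> List[Document]:
        docs = super()._get_docs(question)
        if not docs:
            # Substitute a single fixed-context document
            docs = [Document(page_content=self.FALLBACK)]
        return docs

class EmptyRetriever:
    """A retriever that never finds anything, to exercise the fallback path."""
    def get_relevant_documents(self, question: str) -> List[Document]:
        return []

chain = FixedFallbackChain(EmptyRetriever())
docs = chain._get_docs("what is X?")
print(docs[0].page_content)  # -> No relevant documents found.
```

Only the instances of the subclass get the fallback behavior; every other `ConversationalRetrievalChain` in the application is untouched.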
I hope this helps! If you have any further questions, feel free to ask.
@mrcmoresi Is this really about adding "extra" info to the prompt, or do you just want the chain to return a fixed message if no relevant documents are found? I think you are looking for the latter. If so, you could look at the returned source documents and 'rewrite' the response (but then you may have to update the memory of the chain). I think it would be best to expose an option in the ConversationalRetrievalChain to return a fixed user-defined message if no relevant documents are found. It seems like a fairly common request and one that should be supported in a straightforward manner. I can originate a PR.
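As a sketch of the 'rewrite the response' approach above: if the chain is created with `return_source_documents=True`, its output dict includes a `source_documents` list, and a thin wrapper can substitute a fixed message when that list is empty. The function and message names below are hypothetical:

```python
FALLBACK_MESSAGE = (
    "I don't have that information. Could you rephrase your question "
    "or ask about a different topic?"
)

def rewrite_if_no_sources(chain_output: dict) -> str:
    """Return the chain's answer, or a fixed message if no documents were retrieved.

    Expects the output dict of a chain run with return_source_documents=True,
    e.g. {"answer": "...", "source_documents": [...]}.
    """
    if not chain_output.get("source_documents"):
        return FALLBACK_MESSAGE
    return chain_output["answer"]

# Example: retrieval found nothing, so the generated answer is discarded
print(rewrite_if_no_sources({"answer": "made up", "source_documents": []}))
```

Note the caveat from the comment above: if you rewrite the response after the fact, the chain's memory may still contain the original (possibly fabricated) answer and may need to be updated as well.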
hi @sudranga, thanks for your answer. Yes, exactly: what I want is to return a fixed message when relevant documents are not found.
I think the option you mentioned is the best alternative: just expose an option in the ConversationalRetrievalChain.
Currently what I'm doing is overriding the `_get_docs` method of the ConversationalRetrievalChain so that when no document is retrieved, I return a "fixed" document saying "no document retrieved, please ask for clarification", which becomes the context the LLM uses to generate the answer. This consumes a call to the LLM, which would be unnecessary if the fixed answer were returned directly.
Opened a PR: https://github.com/langchain-ai/langchain/pull/11519
Hi @mrcmoresi, did you find any solution?
Hi @pai4451 I used the new feature from the PR https://github.com/langchain-ai/langchain/pull/11519 but I'm not using async calls.
Looks like I missed it. Will add it.
@sudranga Thanks for implementing the async version. This is what I need.
Hi, @mrcmoresi,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue was opened to dynamically change the prompt in a ConversationalRetrievalChain based on the context value, especially when the retriever returns zero documents. After some discussion, a PR was opened to expose an option in the ConversationalRetrievalChain to return a fixed user-defined message if no relevant documents are found. You confirmed using the new feature from the PR but mentioned not using async calls. Subsequently, the async version was implemented in a new PR.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and cooperation.
Issue you'd like to raise.
Hello, I'm trying to dynamically change a prompt based on the value of the context in a ConversationalRetrievalChain.
The idea is that when the retriever returns 0 documents, the context should contain something that pushes the model to say "I don't know" rather than make up an answer. I already tried adding "if you don't know the answer don't make it up" to the prompt, and it didn't work.
Any idea how to add a condition in the template or control the LLM based on the output of the retriever?
Suggestion:
No response