Open mertzamir opened 1 month ago
Can I work on this?
What do you mean? 😅
@havkerboi123 you are not passing the correct input variable for the document variable in your prompt. Just do this and it should work.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain_openai import ChatOpenAI

postmarket_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(model=config.QA_MODEL, temperature=config.TEMPERATURE),
    chain_type="stuff",
    retriever=rds.as_retriever(search_type="similarity", search_kwargs={"k": 8}),
    return_source_documents=True,
    verbose=True,
    chain_type_kwargs={"document_variable_name": "context", "prompt": postmarket_prompt},
)
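To see why this fixes the missing-key error, here is a rough, stdlib-only sketch of what a "stuff" combine step conceptually does (the helper below is illustrative, not LangChain's actual implementation): the retrieved documents are joined and substituted into the prompt slot named by document_variable_name.

```python
# Toy model of a "stuff" combine-documents step (illustrative only,
# not LangChain's real implementation).
def make_stuff_chain(prompt_template, document_variable_name="context"):
    def run(docs, **inputs):
        # Join the retrieved documents and place them in the slot
        # named by document_variable_name, then fill the template.
        inputs[document_variable_name] = "\n\n".join(docs)
        return prompt_template.format(**inputs)
    return run

# Hypothetical single-brace prompt in the same shape as postmarket_prompt:
template = "Use the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
chain = make_stuff_chain(template, document_variable_name="context")
print(chain(["excerpt from PDF A", "excerpt from PDF B"], question="What changed?"))
```

If document_variable_name does not match the placeholder in the template, the substitution has nowhere to land, which is the mismatch the chain_type_kwargs fix addresses.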
@keenborder786 thanks for your answer, that worked for me! One last question: would adding the prompt field to chain_type_kwargs, like you mentioned in your answer, make the next line obsolete? (next line for your reference:)
postmarket_chain.combine_documents_chain.llm_chain.prompt = postmarket_prompt
Checked other resources
Example Code
Then the postmarket_chain is used by the tool I defined in my LangChain agent as func=postmarket_chain.invoke
Error Message and Stack Trace (if applicable)
Description
I have a multimodal RAG system that generates answers using text parsed from hundreds of PDFs and retrieved from my Redis vectorstore. I have several chains (RetrievalQAWithSourcesChain) that find relevant contextual texts in the vectorstore and append them to my chatbot LLM calls. I'm having trouble correctly adding the context to the system prompt: the code below throws ValueError: Missing some input keys: {'context'}.
The RetrievalQAWithSourcesChain is supposed to use the Redis retriever and inject the retrieved texts into {context}, I believe, but it seems like it can't, or there's something else I'm not seeing.
Surprisingly, it works when I use double brackets around 'context' in the prompt -> {{context}}. However, when I examine the logs of the intermediate steps where LangChain uses the agent's tools to generate an answer, my understanding is that the context is not passed at all and the LLM just answers from its own knowledge, without any of the contextual info that's supposed to come from the vectorstore. Here are some logs below. Notice how some text returned from the vectorstore is included in the summaries, but when StuffDocumentsChain passes that to llm:ChatOpenAI you can see it's not injected into the system prompt (scroll right to see); the context field still remains as {context} (it dropped the outer brackets).
Am I right that the context is not being passed into the prompt correctly? How can I fix this? All the examples I see in other projects use single brackets around context when they include it in the system prompt, but I could only make the code work with double brackets, and that seems not to inject the context at all...
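For what it's worth, the single- vs double-brace behavior can be reproduced with plain str.format, which is the same f-string-style templating that LangChain's default PromptTemplate uses: {{context}} is an escaped literal brace, so formatting no longer requires a context input (the missing-key error disappears), but nothing is ever substituted into it. A minimal stdlib demonstration (the template strings are made up for illustration):

```python
single = "Context:\n{context}\n\nQuestion: {question}"
double = "Context:\n{{context}}\n\nQuestion: {question}"

# Single braces: {context} is a real variable; leaving it out raises KeyError
# (LangChain surfaces this as "Missing some input keys: {'context'}").
try:
    single.format(question="What is X?")
except KeyError as err:
    print("missing key:", err)  # → missing key: 'context'

# Double braces: {{context}} is an escaped literal, so formatting succeeds,
# but the result still contains the literal text "{context}" -- no injection.
filled = double.format(question="What is X?")
print(filled)
```

So with double braces the chain runs without error precisely because the template no longer declares a context variable, which matches the logs showing {context} surviving unfilled in the system prompt.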
Could this be due to the index schema I used when creating the vectorstore? The schema for reference:
System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.16
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
Python 3.12.4
OS: MacOS Sonoma 14.4.1