Open crslen opened 1 month ago
Hi @crslen
The provided snippet works for me:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_nvidia_ai_endpoints import ChatNVIDIA

model = ChatNVIDIA(
    model=llm,
    temperature=temp,
    top_p=top_p,
    max_tokens=token,
)
chain = (
    prompt
    | model
    | StrOutputParser()
)
rag_chain = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=chain)
print(rag_chain.invoke("where did harrison work?"))
```
```
{'context': [Document(page_content='harrison worked at kensho')], 'question': 'where did harrison work?', 'answer': 'Harrison worked at Kensho.'}
```
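For reference, the dict that the `RunnableParallel` step assembles before `.assign(answer=chain)` runs can be sketched without LangChain at all. The `retriever` function and its single-document result below are stand-ins, not the actual vector-store retriever:

```python
# Plain-Python stand-in for RunnableParallel({"context": retriever,
# "question": RunnablePassthrough()}): run both branches on the same
# input and collect the results into one dict.

def retriever(question):
    # stand-in for the vector-store retriever; returns a list of
    # document texts (a real retriever returns Document objects)
    return ["harrison worked at kensho"]

def rag_parallel(question):
    # "context" branch calls the retriever; "question" branch is a
    # passthrough that forwards the input unchanged
    return {"context": retriever(question), "question": question}

result = rag_parallel("where did harrison work?")
print(result)
```

The `answer` key is then added by running the prompt/model chain over this dict, which is why both `context` and `question` appear alongside `answer` in the output above.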
NOTE: NemoEmbedding should not be used; please refer to NVIDIAEmbeddings instead.
Please provide more information if the issue still persists.
When using the code provided in the documentation (langchain docs), the expected response is not returned. When I swap the ChatNVIDIA class for OllamaLLM to compare, I get the correct response.
ChatNVIDIA class response:

```
[Document(page_content='harrison worked at kensho')] Harrison is a common name, and without additional context, it's not possible to determine exactly where a specific Harrison has worked. If you could please provide more details, such as the last name or the industry, that would help narrow down the search.
```

OllamaLLM class response:

```
[Document(page_content='harrison worked at kensho')] Based on the information provided in the context, Harrison worked at Kensho.
```
It appears that the model does not take the retrieved Document into account when generating its answer.
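One way to narrow this down is to inspect the prompt that is actually rendered before it reaches the model: if the retrieved context is missing from the rendered text, the prompt template (not the ChatNVIDIA model) is the likely culprit. A minimal stand-in sketch, with `format_prompt` and the sample document being hypothetical placeholders for the real template and retriever output:

```python
# Hypothetical debugging sketch: render the prompt text that would be
# sent to the model and confirm the retrieved context appears in it.

def format_prompt(context_docs, question):
    # stand-in for the ChatPromptTemplate used in the chain
    context = "\n".join(context_docs)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

docs = ["harrison worked at kensho"]  # what the retriever returned
rendered = format_prompt(docs, "where did harrison work?")
print(rendered)
```

If the real template interpolates the context like this and the answer still ignores it, the problem is more likely on the model side (e.g., how the chosen NVIDIA model handles the system/context message).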