At present, we ask an LLM a question and provide only the source text as context.
The LLM's answer could be incorrect.
We can increase the chance that the LLM's answer is correct by adding real-world data related to the question as additional context.
If we group questions into categories, we can map each category to a real-world context source.
If the question asks about x, provide context related to x.
LangChain addresses this use case with tools and agents: each context source can be wrapped as a tool, and an agent chooses which tool to invoke based on the question (see the sketch below).
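As a rough illustration of the idea (not the LangChain API itself), the sketch below maps question categories to context-retrieval functions and appends the retrieved data to the prompt. All names here (retrieve_weather_context, retrieve_finance_context, categorize, build_prompt) are hypothetical assumptions for illustration; in LangChain terms each retriever would be exposed as a tool for an agent to pick from.

```python
# Minimal sketch of category-based context routing, assuming hypothetical
# retriever functions and a naive keyword classifier; not LangChain-specific.

def retrieve_weather_context(question: str) -> str:
    # Hypothetical: would call a weather API or database.
    return "Weather data relevant to: " + question

def retrieve_finance_context(question: str) -> str:
    # Hypothetical: would call a market-data source.
    return "Financial data relevant to: " + question

# Map question categories to their real-world context sources.
CONTEXT_SOURCES = {
    "weather": retrieve_weather_context,
    "finance": retrieve_finance_context,
}

def categorize(question: str) -> str:
    # Naive keyword classifier; an LLM or trained classifier could do this instead.
    lowered = question.lower()
    if any(word in lowered for word in ("rain", "temperature", "forecast")):
        return "weather"
    if any(word in lowered for word in ("stock", "price", "revenue")):
        return "finance"
    return "general"

def build_prompt(question: str, source_text: str) -> str:
    # Start with the source text, then add category-specific real-world context.
    context = source_text
    retriever = CONTEXT_SOURCES.get(categorize(question))
    if retriever is not None:
        context += "\n\n" + retriever(question)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("Will it rain in Boston tomorrow?", "Boston is a coastal city."))
```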