Matagi1996 opened 11 months ago
LangChain works a bit differently from the other methods. As you can see in the source code here, the prompt does not use the [DOCUMENTS] tag; instead, the representative documents are passed to LangChain directly.
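To illustrate the difference, here is a minimal sketch with made-up prompt and documents (not BERTopic's actual source): with TextGeneration/OpenAI the tags inside the prompt are substituted, whereas with the LangChain representation the prompt is used as-is and the documents are handed to the chain separately.

```python
# Hypothetical example; the prompt, keywords, and docs are made up.
prompt = "These documents are about: [DOCUMENTS]\nKeywords: [KEYWORDS]\nGive a short label."
keywords = ["battery", "charging", "range"]
docs = ["Doc about EV charging speed.", "Doc about battery degradation."]

# TextGeneration/OpenAI style: the tags inside the prompt are replaced.
filled = prompt.replace("[KEYWORDS]", ", ".join(keywords))
filled = filled.replace("[DOCUMENTS]", "\n".join(docs))
print(filled)

# LangChain style: the prompt is left untouched and the representative
# documents are passed to the chain separately (roughly
# chain.run(input_documents=..., question=prompt)), so the literal
# [DOCUMENTS]/[KEYWORDS] tags are never replaced.
print(prompt)
```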
That does indeed mean that the documentation should be updated to properly describe this behavior.
Does that mean that, currently, the LangChain representation model doesn't give you the option to put keywords in the prompt?
That is correct. It should be straightforward to implement yourself considering other models do have that option.
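As a rough sketch of what that could look like (not an official API; it assumes a fitted BERTopic model, a QA-style LangChain chain, and the `Document` import path may differ between LangChain versions), you could build the prompt yourself and only then hand it to the chain:

```python
from langchain.docstore.document import Document  # import path may differ per LangChain version

def label_topic_with_keywords(topic_model, chain, topic_id, prompt_template):
    """Fill [KEYWORDS]/[DOCUMENTS] manually before calling the chain."""
    # get_topic returns a list of (word, weight) tuples for the topic
    keywords = ", ".join(word for word, _ in topic_model.get_topic(topic_id))
    docs = topic_model.get_representative_docs(topic_id)

    prompt = prompt_template.replace("[KEYWORDS]", keywords)
    prompt = prompt.replace("[DOCUMENTS]", "\n".join(docs))

    # The prompt now already contains the keywords and documents, so the
    # chain only receives the documents for its own context handling.
    chain_docs = [Document(page_content=d) for d in docs]
    return chain.run(input_documents=chain_docs, question=prompt).strip()
```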
The tutorials for LLM topic generation use textgeneration.py or openai; those classes have this function to insert topics and documents into a custom prompt:

```python
def _create_prompt(self, docs, topic, topics):
    keywords = ", ".join(list(zip(*topics[topic]))[0])
    ...
```
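For reference, `topics[topic]` there is a list of `(word, weight)` tuples, so that line just turns the topic's top words into a comma-separated string, e.g. (made-up values):

```python
# Made-up topic representation: a list of (word, weight) tuples.
topic_words = [("battery", 0.05), ("charging", 0.04), ("range", 0.03)]

# Same trick as in _create_prompt: zip(*...) splits words from weights.
keywords = ", ".join(list(zip(*topic_words))[0])
print(keywords)  # "battery, charging, range"
```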
It seems like this function is missing from the LangChain wrapper, and therefore a LangChain pipeline will not replace the [DOCUMENTS]/[KEYWORDS] tags in the prompt.
I will write my own wrapper for now; I just wanted to confirm that this is the reason the topics were not inserted into my prompt, or whether I am missing something crucial compared to the other wrappers.