Closed: VirajBhatt closed this issue 1 year ago.
Hi, @VirajBhatt. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, the issue you raised is about the LangChain Summarizer appending the prompt template to the summarized response, even when the input text is not lengthy enough. This behavior is not expected, as the prompt text should be excluded.
Currently, there hasn't been any activity or updates on this issue. So, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let the LangChain team know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain project, and we appreciate your understanding as we work to manage our backlog effectively. If you have any further questions or concerns, please let us know.
System Info
LangChain version: 0.0.187
Python version: 3.9
Who can help?
Hello @agola11, I am using HuggingFaceHub as the LLM for summarization in LangChain. I am noticing that if the input text is not lengthy enough, the prompt template is included verbatim in the output.
Information
Related Components
Reproduction
Sample Code:
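A minimal sketch of the setup (the repo_id, API token handling, and input text here are placeholders, not my exact values):

```python
import os

from langchain.llms import HuggingFaceHub
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

# Placeholder token; set your own Hugging Face Hub API token.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "<your-huggingfacehub-api-token>"

# Placeholder model; substitute the Hub model you are summarizing with.
llm = HuggingFaceHub(
    repo_id="facebook/bart-large-cnn",
    model_kwargs={"temperature": 0.1, "max_length": 256},
)

# A deliberately short input text.
text = "LangChain is a framework for developing applications powered by language models."
docs = [Document(page_content=text)]

chain = load_summarize_chain(llm, chain_type="stuff", verbose=True)
print(chain.run(docs))
```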
Verbose Output:
Summarized Output (notice how the prompt text is appended as well):
Expected behavior
It should not include the prompt text and should simply output the summarized text; alternatively, if the input text is too short to summarize, it might as well return the original text as-is, as sketched below.
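For illustration, a guard along these lines on the calling side would give the behaviour I have in mind (the `min_words` threshold is an arbitrary placeholder):

```python
from langchain.docstore.document import Document


def summarize_or_passthrough(chain, text: str, min_words: int = 40) -> str:
    """Return the text unchanged when it is too short to be worth summarizing,
    otherwise run it through the summarization chain."""
    if len(text.split()) < min_words:
        return text
    return chain.run([Document(page_content=text)])
```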
Expected Output: