Closed. dylanlap98 closed this pull request 5 days ago.
Hello,
Thanks for getting back to me about this.
I have actually already passed my own prompt into this chain. If possible, I would love to help with the documentation / code example that shows this functionality. I can forward you the code for how I did it if that's helpful.
To answer your question, yes I think documentation on this would be valuable.
Thanks, Dylan LaPierre
On Apr 29, 2024, at 10:48 AM, Bagatur wrote:
@baskaryan commented on this pull request.
in general we try to not change default prompts since those can lead to silent changes in user code behavior. would it be better in this case to update the docs to show how you can pass in your own prompt?
Review comment: https://github.com/langchain-ai/langchain/pull/20979#pullrequestreview-2028734604
agree with bagatur on not changing prompts. Would be good to have a how-to guide on how to pass in a custom prompt, would go here: https://github.com/langchain-ai/langchain/tree/master/docs/docs/how_to
going to close this, but would love to see that documentation!
Fix for prompt_template in chain_filter_prompt, whose default wording can elicit a wordy response containing multiple answers, which the filter's boolean parsing then rejects with a ValueError. The model running the chain filter is often a lightweight one, so adding an explicit single-word instruction to the prompt helps it produce a more stable, parseable response.
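To make the failure mode concrete, here is a stdlib-only sketch of the idea behind the fix: a filter prompt with an explicit single-word instruction, plus a strict YES/NO parser like the one backing the chain filter. The names `build_filter_prompt` and `parse_boolean` are illustrative, not the actual LangChain API; in LangChain itself you would instead pass a custom `PromptTemplate` via the `prompt` argument of `LLMChainFilter.from_llm(...)`.

```python
# Sketch of a stricter relevance-filter prompt and the boolean parsing
# that rejects wordy model output. Illustrative only, not LangChain's code.

CUSTOM_FILTER_TEMPLATE = (
    "Given the following question and context, decide whether the context "
    "is relevant to the question.\n"
    "Answer with a single word only: YES or NO.\n\n"
    "> Question: {question}\n"
    "> Context:\n>>>\n{context}\n>>>\n"
    "> Relevant (YES / NO):"
)


def build_filter_prompt(question: str, context: str) -> str:
    """Fill the template; a real implementation would use a PromptTemplate."""
    return CUSTOM_FILTER_TEMPLATE.format(question=question, context=context)


def parse_boolean(text: str) -> bool:
    """Strictly parse a YES/NO answer.

    A wordy reply ("Yes, because...") raises ValueError -- exactly the
    failure the explicit single-word instruction in the prompt prevents.
    """
    cleaned = text.strip().upper()
    if cleaned == "YES":
        return True
    if cleaned == "NO":
        return False
    raise ValueError(f"Expected YES or NO, got: {text!r}")
```

The design point is that the prompt and the parser must agree: if the parser only accepts a bare YES/NO, the prompt should say so explicitly, especially when the filtering model is small.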
THANK YOU LANGCHAIN TEAM!