langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

"Human: " added to the prompt. #24525

Open eruniyule opened 1 month ago

eruniyule commented 1 month ago


Example Code

from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain.globals import set_debug

set_debug(True)
prompt = PromptTemplate(template="user:{text}", input_variables=["text"])
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model
chain.invoke({"text": "hello"})

Error Message and Stack Trace (if applicable)

[llm/start] [chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: user:hello"
  ]
}

Description

Issue 1: Even when using a custom prompt, "Human: " is added to all of my prompts, which has been messing up my outputs.

Issue 2 (possible, unverified): This has me thinking that "\nAI: " is also appended to the prompt, which is in line with how my LLMs are reacting. For example, if I end the prompt with "\nSummary:\n", the model sometimes repeats "Summary" unless explicitly told not to.
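For context, a chat model wraps a plain string prompt in a human/user message, and the debug logger renders each message with a role-name prefix, which is where the "Human: " text in the log comes from. A minimal sketch of that rendering, using simplified stand-in types rather than langchain-core's actual classes:

```python
from dataclasses import dataclass


# Simplified stand-in for a chat message (not langchain-core's real class).
@dataclass
class HumanMessage:
    content: str
    type: str = "human"


def to_messages(prompt: str) -> list[HumanMessage]:
    # A plain string prompt handed to a chat model becomes a single
    # human-role message.
    return [HumanMessage(content=prompt)]


def get_buffer_string(messages: list[HumanMessage]) -> str:
    # Debug logging renders each message as "<RoleName>: <content>",
    # producing the "Human: " prefix seen in the log output above.
    role_names = {"human": "Human", "ai": "AI"}
    return "\n".join(f"{role_names[m.type]}: {m.content}" for m in messages)


print(get_buffer_string(to_messages("user:hello")))  # Human: user:hello
```

This mirrors the behavior of langchain-core's `get_buffer_string` helper, which the debug tracer uses to pretty-print message lists.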

System Info

langchain==0.2.10
langchain-aws==0.1.6
langchain-community==0.2.5
langchain-core==0.2.22
langchain-experimental==0.0.61
langchain-google-genai==1.0.8
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langchain-upstage==0.1.6
langchain-weaviate==0.0.2

eyurtsev commented 1 month ago

Use a ChatPromptTemplate if you're working with a chat model:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
  [
    ('system', 'Your name is bob the assistant.'),
    ('user', '{input}')
  ]
)
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model
chain.invoke({"input": "hi!"})
eruniyule commented 1 month ago

The thing is, I don't want ANY prefix on my messages, but langchain seems to attach a prefix no matter which option I use. Is there a way to get rid of it?
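One point worth checking here: the "Human: " prefix may only exist in the debug log's rendering, not in the text actually sent to the provider. A chat-completions request carries the role as a separate field, so nothing is prepended to the message content itself. A hedged sketch of the payload shape (hypothetical helper, not langchain's internals):

```python
# Hypothetical helper illustrating the shape of a chat-completions request
# body. The role travels as its own "role" field; the literal string
# "Human: " is only how the debug tracer pretty-prints messages, not part
# of the content sent to the model.

def to_chat_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = to_chat_payload("user:hello")
print(payload["messages"][0])  # {'role': 'user', 'content': 'user:hello'}
```

If the content reaching the model must carry no role structure at all, a text-completion model (rather than a chat model) avoids the message wrapping entirely.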