Hello,
The error you're encountering is due to the structure of your prompt. The LangChain framework expects a specific pattern in the conversation flow, alternating between 'Human:' and 'Assistant:' roles.
In your current prompt template, you have a large block of text under 'Human:', but there's no corresponding 'Assistant:' before the next 'Human:' appears. This is likely causing the ValueError you're seeing.
Here's an example of how you might structure your prompt:
prompt_template = """
Human:
You are a helpful, respectful, and honest assistant, dedicated to providing valuable and accurate information.
Assistant:
Understood. I will provide information based on the context given, without relying on prior knowledge.
Human:
If you don't see answer in the context just Reply "Sorry , the answer is not in the context so I don't know".
Assistant:
Noted. I will respond with "don't know" if the information is not available in the context.
Human:
Now read this context and answer the question.
{context}
Assistant:
Based on the provided context above and information from the retriever source, I will provide a detailed answer to the below question
{question}
"""
In this revised template, each 'Human:' message is followed by an 'Assistant:' message, ensuring the alternation that the LangChain framework expects.
Please try this revised template and let me know if you continue to encounter issues.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
I am still facing the same issue. Could it be because I am using the code below after creating the LLM object from BedrockChat?
condense_qa_template = """
Given the following conversation and a follow up question, rephrase the follow up question
to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
standalone_question_prompt = PromptTemplate.from_template(condense_qa_template)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    condense_question_prompt=standalone_question_prompt,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)
@memanu347 I encountered the same error when using ConversationBufferMemory to maintain a history of conversations. Adjusting the ai_prefix setting resolved the issue. I hope this helps.
conversation = ConversationChain(
    prompt=prompt,
    llm=llm,
    memory=ConversationBufferMemory(
        # Anthropic requires the prefix to be "Assistant":
        ai_prefix="Assistant"
    )
)
@Hayato-Kosugi Can you help me update my code below to make it work? It was working completely fine until about 12 PM yesterday.
prompt_template = """
Human:
You are a helpful, respectful, and honest assistant, dedicated to providing valuable and accurate information.
Assistant:
Understood. I will provide information based on the context given, without relying on prior knowledge.
Human:
If you don't see answer in the context just Reply "Sorry , the answer is not in the context so I don't know".
Assistant:
Noted. I will respond with "don't know" if the information is not available in the context.
Human:
Now read this context and answer the question.
{context}
Assistant:
Based on the provided context above and information from the retriever source, I will provide a detailed answer to the below question
{question}
"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"],
)
modelId = 'anthropic.claude-v2'

# LangChain LLM for invoking the Bedrock model
llm = BedrockChat(
    model_id=modelId,
    client=bedrock_client,
    model_kwargs={"max_tokens_to_sample": 1000}
)
def get_chat_history(inputs, result) -> str:
    res = []
    res.append((inputs, result["answer"]))
    print(result['answer'])
    if 'source_documents' in result:
        print('Sources:')
        for d in result['source_documents']:
            print(d.metadata['source'])
    return result['answer']
condense_qa_template = """
Given the following conversation and a follow up question, rephrase the follow up question
to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
standalone_question_prompt = PromptTemplate.from_template(condense_qa_template)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    condense_question_prompt=standalone_question_prompt,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)
@sbusso @Hayato-Kosugi @jarib @zeke I think the bug is in the code below, in the HUMAN_PROMPT check around line 45 of bedrock.py: it reads count % 2 == 0, which in my opinion should be count % 2 == 1, as it is in the ASSISTANT_PROMPT section.
# track alternation
for i in range(len(input_text)):
    if input_text[i : i + len(HUMAN_PROMPT)] == HUMAN_PROMPT:
        if count % 2 == 0:  # <-- the check in question
            count += 1
        else:
            raise ValueError(ALTERNATION_ERROR)
    if input_text[i : i + len(ASSISTANT_PROMPT)] == ASSISTANT_PROMPT:
        if count % 2 == 1:
            count += 1
        else:
            raise ValueError(ALTERNATION_ERROR)
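For anyone who wants to poke at this check outside LangChain, here is a minimal standalone sketch of the same loop. The marker values and error string are assumptions (the real constants live in bedrock.py), so treat this as an illustration rather than the library code:

# Standalone sketch of the alternation check quoted above, not the library code.
HUMAN_PROMPT = "\n\nHuman:"          # assumed value of the library constant
ASSISTANT_PROMPT = "\n\nAssistant:"  # assumed value of the library constant
ALTERNATION_ERROR = "Error: Prompt must alternate between 'Human:' and 'Assistant:'."

def check_alternation(input_text: str) -> None:
    count = 0
    for i in range(len(input_text)):
        if input_text[i : i + len(HUMAN_PROMPT)] == HUMAN_PROMPT:
            if count % 2 == 0:
                count += 1
            else:
                raise ValueError(ALTERNATION_ERROR)
        if input_text[i : i + len(ASSISTANT_PROMPT)] == ASSISTANT_PROMPT:
            if count % 2 == 1:
                count += 1
            else:
                raise ValueError(ALTERNATION_ERROR)

check_alternation("\n\nHuman: hi\n\nAssistant: hello\n\nHuman: bye\n\nAssistant:")  # passes

try:
    check_alternation("\n\nHuman: part one\n\nHuman: part two\n\nAssistant:")
except ValueError as e:
    print(e)  # two consecutive Human: turns trip the check, as in the failing template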
The Bedrock API wants to see alternating Human:/Assistant: prompts for Claude v2. Something is probably appending "Human:" to the beginning of your condense_qa_template. By putting the chat history in the middle, you end up with Human/Human/Assistant/Assistant. Try this:
condense_qa_template = """{chat_history}
Given the above conversation and a follow up question below, rephrase the follow up question
to be a standalone question.
Follow Up Question: {question}
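As a quick sanity check, rendering the rearranged template with some made-up values (purely hypothetical, for illustration) shows the final prompt shape:

# Assuming the condense_qa_template defined just above:
from langchain.prompts import PromptTemplate

tmpl = PromptTemplate.from_template(condense_qa_template)
print(tmpl.format(
    chat_history="\n\nHuman: What is in the document?\n\nAssistant: It describes the plot.",
    question="Who wrote it?",
))
# The rendered prompt begins with the history's Human: turn, so (per the reasoning
# above) nothing extra should need to be prepended and the turns keep alternating.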
I've had the same issue with the latest version of LangChain and Claude v2; something has changed in the way it processes history. A single prompt worked fine, but the second failed with the error Prompt must alternate between ' Human:' and ' Assistant:'.
Using OpenAI as the base model worked fine.
I fixed it by changing the prompt to this:
claude_prompt = PromptTemplate.from_template("""
The following is a friendly conversation between a human and an AI.
The AI is talkative and provides lots of specific details from its context. If the AI does not know
the answer to a question, it truthfully says it does not know.
Current conversation:
<conversation_history>
{history}
</conversation_history>
Human:
<human_reply>
{input}
</human_reply>
Assistant:
""")
# Access Bedrock
client = boto3.client("bedrock-runtime")

# Underlying model
cl_llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=client,
    model_kwargs={"max_tokens_to_sample": 1000},
)

# Keep track of questions & answers
memory = ConversationBufferMemory(ai_prefix="Assistant")

# Build the chain
conversation = ConversationChain(
    llm=cl_llm,
    verbose=True,
    memory=memory,
    prompt=claude_prompt
)
@theNicelander this works!!
I'm just trying to implement this into a larger system of chains. In particular, if you are using ConversationalRetrievalChain and you have a question_generator_chain and a condense_question_chain, would I need to add Human: and Assistant: to those prompts as well?
I have tried this logic with ConversationalRetrievalChain, but I keep running into the same issue.
bedrock_client = boto3.client("bedrock-runtime")
modelId = "anthropic.claude-v2"
llm = BedrockChat(
    model_id=modelId, client=bedrock_client, model_kwargs={"max_tokens_to_sample": 1000}
)
db = Chroma(
    persist_directory="./db_testing/", embedding_function=OpenAIEmbeddings(client=None)
)
memory = ConversationBufferMemory(
    memory_key="chat_history",
    ai_prefix="Assistant",
)
prompt_template = """
Human:
You are a helpful, respectful, and honest assistant, dedicated to providing valuable and accurate information.
Assistant:
Understood. I will provide information based on the context given, without relying on prior knowledge.
Human:
If you don't see answer in the context just Reply "Sorry , the answer is not in the context so I don't know".
Assistant:
Noted. I will respond with "don't know" if the information is not available in the context.
Human:
Now read this context and answer the question.
{context}
Assistant:
Based on the provided context above and information from the retriever source, I will provide a detailed answer to the below question
{question}
"""
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
)
condense_qa_template = """{chat_history}
Given the above conversation and a follow up question below, rephrase the follow up question
to be a standalone question.
Follow Up Question: {question}
Standalone question:"""
standalone_question_prompt = PromptTemplate.from_template(condense_qa_template)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=db.as_retriever(),
    condense_question_prompt=standalone_question_prompt,
    return_source_documents=True,
    chain_type="stuff",
    combine_docs_chain_kwargs={"prompt": PROMPT},
    memory=memory,
)
print(qa({"question": "What is the plot?"}))
The error is:
raise ValueError(ALTERNATION_ERROR)
ValueError: Error: Prompt must alternate between ' Human:' and ' Assistant:'.
Has anyone managed to set up a ConversationalRetrievalChain with Bedrock or BedrockChat?
Folks,
Try using the BedrockChat class instead of Bedrock. Here is a working example.
from langchain.chains import ConversationChain
from langchain.chat_models import BedrockChat
from langchain.llms import Bedrock
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    MessagesPlaceholder,
    HumanMessagePromptTemplate,
    PromptTemplate
)

SYSTEM_PROMPT = """
The following is a friendly conversation between a human and an AI.
The AI is talkative and provides lots of specific details from its context. If the AI does not know
the answer to a question, it truthfully says it does not know.
"""


def create_chain():
    prompt_template = ChatPromptTemplate.from_messages(
        [
            SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT),
            MessagesPlaceholder(variable_name="history"),
            HumanMessagePromptTemplate.from_template("{input}")
        ]
    )
    llm = BedrockChat(model_id="anthropic.claude-v2", region_name="us-west-2")
    memory = ConversationBufferMemory(return_messages=True)
    llm_chain = ConversationChain(llm=llm, prompt=prompt_template, memory=memory, verbose=False)

    return llm_chain


if __name__ == "__main__":
    chain = create_chain()
    print(chain.predict(input="Hi there!"))
    print(chain.predict(input="What is the capital of France?"))
    print(chain.predict(input="How about Canada?"))
Hey,
I am trying to use BedrockChat and running into the same issue as described here. I am using langchain==0.0.313.
module "warning" resolves the issue
import warnings warnings.filterwarnings('ignore')
For your reference, here is an example using the LLMChain class. It works fine.
I am using langchain==0.0.339.
This example is in accordance with the format of Claude's prompt.
Please pay attention to template_text and the constructor of ConversationBufferMemory.
I hope this will be of help to you.
import boto3
import streamlit as st
from langchain.chains import LLMChain
from langchain.llms.bedrock import Bedrock
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
from langchain.prompts import PromptTemplate
claude_params = {
    "max_tokens_to_sample": 1000,
    "temperature": 0,
    "top_k": 10,
    "top_p": 0.5,
    "stop_sequences": ["\n\nHuman:"],
}
def main():
    msgs = StreamlitChatMessageHistory(key="chat_messages")
    memory = ConversationBufferMemory(
        memory_key="chat_history",
        chat_memory=msgs,
        human_prefix="\nHuman",
        ai_prefix="\nAssistant",
    )
    if len(msgs.messages) == 0:
        msgs.add_ai_message("How can I help you?")

    template_text = """\
{chat_history}
Human: You are an AI chatbot having a conversation with a human.
Above is conversation history.
<human_reply> is the human's next reply.
<human_reply>
{human_input}
</human_reply>
Assistant: """

    prompt_template = PromptTemplate(
        # Variable names must match {chat_history} and {human_input} in
        # template_text (and the memory_key above).
        input_variables=["chat_history", "human_input"], template=template_text
    )

    bedrock_runtime = boto3.client(
        service_name="bedrock-runtime",
        region_name="us-east-1",
    )
    modelId = "anthropic.claude-v2"
    cl_llm = Bedrock(
        model_id=modelId,
        client=bedrock_runtime,
        model_kwargs=claude_params,
    )

    llm_chain = LLMChain(
        llm=cl_llm,
        prompt=prompt_template,
        memory=memory,
        verbose=True,
    )

    for msg in msgs.messages:
        with st.chat_message(msg.type):
            st.write(msg.content)

    if input_text := st.chat_input():
        with st.chat_message("human"):
            st.markdown(input_text)
        response = llm_chain.run(human_input=input_text)
        with st.chat_message("ai"):
            st.markdown(response)


if __name__ == "__main__":
    main()
Using ChatPromptTemplate as per @3coins' suggestion fixed it for me.
What about this?
LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
`from langchain_community.chat_models import BedrockChat`.
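For illustration, the change the warning asks for is just swapping the import; the class should otherwise behave the same (assuming the langchain-community package is installed):

# Deprecated import (no longer supported as of langchain 0.2.0):
# from langchain.chat_models import BedrockChat

# Replacement import from the langchain-community package:
from langchain_community.chat_models import BedrockChat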
I just pushed a fix, please check it out: https://github.com/langchain-ai/langchain/pull/16968
FWIW, you can apply the fix before it's merged by extending the Bedrock model, implementing _convert_input from my PR, and using that subclass instead of the base Bedrock.
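As a rough illustration of the shape of that workaround (this is not the code from the PR; the real _convert_input logic should be copied from there, and the normalization below is only a placeholder assumption):

# Hypothetical sketch only: subclass Bedrock and override _convert_input.
# The actual logic belongs in PR #16968; this stand-in just shows the wiring.
from langchain_community.llms import Bedrock
from langchain_core.prompt_values import StringPromptValue

class PatchedBedrock(Bedrock):
    def _convert_input(self, input):
        prompt_value = super()._convert_input(input)
        text = prompt_value.to_string()
        # Placeholder normalization (an assumption, not the PR's behavior):
        # make sure the prompt opens with a Human: turn and ends with Assistant:.
        if not text.lstrip().startswith("Human:"):
            text = "\n\nHuman: " + text
        if not text.rstrip().endswith("Assistant:"):
            text = text + "\n\nAssistant:"
        return StringPromptValue(text=text)

# Then use PatchedBedrock wherever the base Bedrock model was constructed, e.g.
# llm = PatchedBedrock(model_id="anthropic.claude-v2", client=bedrock_client)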
System Info
While trying to use BedrockChat, it threw the error below. The updated prompt I am using after the recent changes to the Claude prompt format is also included below.
The error is:
raise ValueError(ALTERNATION_ERROR)
ValueError: Error: Prompt must alternate between ' Human:' and ' Assistant:'.
Who can help?
No response
Expected behavior
I would have expected the code to run with the prompt.