Open Casper-Mars opened 5 months ago
🤖
Hey there, @Casper-Mars! 👋 Great to see you back. Hope you're ready to dive into another intriguing LangChain mystery together!
The error message "Conversation roles must alternate user/assistant/user/assistant/..." suggests that the structure of the conversation being passed to the `ChatHuggingFace` model is not correct. The `ChatHuggingFace` model expects a list of messages whose roles alternate between `user` and `assistant`; two consecutive messages cannot have the same role.
From the code you provided, it seems that you are using the `ChatPromptTemplate` to create the prompt for the conversation. The `ChatPromptTemplate` takes a list of messages as input. This list may begin with a `SystemMessage`, after which `HumanMessage` and `AIMessage` must strictly alternate, starting with a `HumanMessage`. The last message in the list should be a `HumanMessage`.
Here is an example of how the list of messages should look:
messages = [
    SystemMessage(content="System message content"),
    HumanMessage(content="Human message content"),
    AIMessage(content="AI message content"),
    HumanMessage(content="Human message content"),
]
In your code, you are using a `MessagesPlaceholder` with the variable name `chat_history` as the second message in the list. The `MessagesPlaceholder` is replaced with the actual messages from the chat history when the prompt is created. If the messages in the chat history do not alternate correctly between `user` and `assistant`, it could lead to the error you are seeing.
To fix this issue, you could validate the structure of the chat history before it is passed to the `ChatPromptTemplate`. You could use a method like the one I suggested in the context to check that the roles of the messages in the chat history alternate correctly.
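A minimal, framework-free sketch of such a validation step, assuming messages are represented as `(role, content)` pairs (the helper name `validate_alternation` and the tuple representation are illustrative, not LangChain API):

```python
def validate_alternation(history):
    """Check that chat-history roles strictly alternate user/assistant.

    `history` is a list of (role, content) tuples; an optional leading
    "system" entry is allowed. Raises ValueError on the first violation.
    """
    roles = [role for role, _ in history]
    if roles and roles[0] == "system":
        roles = roles[1:]
    for i, role in enumerate(roles):
        expected = "user" if i % 2 == 0 else "assistant"
        if role != expected:
            raise ValueError(
                f"Message {i} has role {role!r}, expected {expected!r}: "
                "conversation roles must alternate user/assistant/..."
            )

# A well-formed history passes silently:
validate_alternation([
    ("system", "You are helpful."),
    ("user", "Hi"),
    ("assistant", "Hello!"),
    ("user", "How are you?"),
])
```

Running this check before building the prompt surfaces a malformed `chat_history` with a clear message instead of the opaque Jinja template error.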
Regarding the compatibility of `BaseLanguageModel`: the `BaseLanguageModel` is an abstract base class that defines the interface for all language models in LangChain. The `ChatHuggingFace` model is a specific implementation of this interface. The methods and properties of `BaseLanguageModel` are designed to be general enough to support a wide range of language models. However, each specific language model may have its own requirements and constraints. In this case, the `ChatHuggingFace` model requires the conversation roles to alternate correctly. This is not a requirement of `BaseLanguageModel` itself, but a requirement of the specific language model you are using.
To improve the compatibility of `BaseLanguageModel`, one approach could be to add more detailed documentation about the requirements and constraints of each specific language model. This would help developers understand what they need to do to use a specific language model correctly. Another approach could be to add more validation checks in the methods of `BaseLanguageModel` to catch common errors early. However, this could also make `BaseLanguageModel` more complex and harder to use.
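A gentler alternative to rejecting a malformed history is to normalize it before it reaches the chat model, by merging adjacent messages that share a role. A framework-free sketch (the helper name `merge_consecutive_roles` and the `(role, content)` tuple format are assumptions for illustration, not LangChain API):

```python
def merge_consecutive_roles(history):
    """Merge adjacent messages with the same role into one message,
    so the result satisfies templates that require strict alternation."""
    merged = []
    for role, content in history:
        if merged and merged[-1][0] == role:
            # Same role as the previous message: concatenate contents.
            merged[-1] = (role, merged[-1][1] + "\n" + content)
        else:
            merged.append((role, content))
    return merged

history = [
    ("user", "First question"),
    ("user", "Follow-up sent before any answer"),
    ("assistant", "Combined answer"),
]
print(merge_consecutive_roles(history))
```

Whether merging or raising is the right behavior is a design choice: merging is forgiving but silently rewrites the conversation, while raising forces the caller to fix the history upstream.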
I hope this helps! If you have any further questions, please let me know.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@Casper-Mars Hey, i'm getting the same issue using ChatHuggingFace within my Conversational retrieval chain. Did you manage to fix it?
@joe-barhouch Did you manage to fix it?
I fixed this issue by removing `ChatHuggingFace(llm=llm)` and calling `invoke` directly on the `HuggingFaceHub`. In my code I'm using `HuggingFaceEndpoint`, but the principle might be the same.
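Note that calling the raw text-generation endpoint directly bypasses the tokenizer's chat template, so you have to render the conversation into a plain prompt string yourself. A rough sketch of Mixtral-style `[INST]` formatting, assuming `(role, content)` message tuples (this only approximates the official template; verify against the model card before relying on it):

```python
def to_mixtral_prompt(messages):
    """Render alternating user/assistant messages as a Mixtral-style
    instruction prompt; a leading system message is folded into the
    first user turn."""
    system = ""
    if messages and messages[0][0] == "system":
        system = messages[0][1] + "\n"
        messages = messages[1:]
    parts = ["<s>"]
    for i, (role, content) in enumerate(messages):
        if role == "user":
            prefix = system if i == 0 else ""
            parts.append(f"[INST] {prefix}{content} [/INST]")
        else:  # assistant turn
            parts.append(f"{content}</s>")
    return "".join(parts)

prompt = to_mixtral_prompt([
    ("system", "Answer briefly."),
    ("user", "What is LangChain?"),
])
print(prompt)
```

The resulting string can then be passed as the input to the raw `invoke` call on the endpoint.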
Checked other resources
Example Code
I create an LLM:

def mixtral() -> BaseLanguageModel:
    llm = HuggingFaceHub(
        repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
        task="text-generation",
        model_kwargs={
            "max_new_tokens": 16384,
            "top_k": 30,
            "temperature": 0.1,
            "repetition_penalty": 1.03,
            "max_length": 16384,
        },
    )
    return ChatHuggingFace(llm=llm)
And then use it in other code:

@classmethod
def default_bot(cls, sys_msg: str, llm: BaseLanguageModel):
    h_temp = "{question}"
Init Prompt
Error Message and Stack Trace (if applicable)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 538, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 68, in _generate
    llm_input = self._to_chat_prompt(messages)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 100, in _to_chat_prompt
    return self.tokenizer.apply_chat_template(
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1742, in apply_chat_template
    rendered = compiled_template.render(
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 1301, in render
    self.environment.handle_exception()
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
File "<template>", line 1, in top-level template code
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
    return __context.call(__obj, *args, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1776, in raise_exception
    raise TemplateError(message)
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
Description
I want to know what the root cause of this issue is. I simply replaced the OpenAI GPT-4 llm with ChatHuggingFace. Why is there such an incompatibility? Could the official team consider improving the compatibility of BaseLanguageModel?
System Info
❯ pip freeze | grep langchain
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-experimental==0.0.28
langchain-google-genai==0.0.3
langchain-openai==0.0.2
platform: mac M1