Open EdoMon93 opened 4 days ago
The react agent uses a system prompt, and some HF models probably don't like system prompts? I guess the fix would be that the class should merge system prompts into the first user message?
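A minimal sketch of that suggested workaround, kept library-agnostic (the helper name and the `(role, content)` tuple shape are illustrative, not part of llama-index): fold a leading system message into the first user message so models that reject system prompts still see the instructions.

```python
def merge_system_into_user(messages):
    """Merge a leading system message into the first user message.

    Each message is a (role, content) tuple; returns a new list with
    no system message, prepending the system text to the first user turn.
    """
    if not messages or messages[0][0] != "system":
        return list(messages)
    system_content = messages[0][1]
    merged = list(messages[1:])
    for i, (role, content) in enumerate(merged):
        if role == "user":
            merged[i] = (role, system_content + "\n\n" + content)
            break
    else:
        # No user message to merge into; re-emit the system text as a user turn
        merged.insert(0, ("user", system_content))
    return merged

print(merge_system_into_user([
    ("system", "You are a helpful agent."),
    ("user", "What is 2+2?"),
]))
# → [('user', 'You are a helpful agent.\n\nWhat is 2+2?')]
```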
Hey @EdoMon93! 👋 I'm here to help you out with any bugs, questions, or contributions you have. Let's squash those bugs together!
To resolve the `NotImplementedError` when using `HuggingFaceInferenceAPI` with `ReActAgent` in version 0.12, you need to ensure that the `chat` method is called with a sequence of `ChatMessage` objects rather than a single string. The `chat` method expects a list of messages, and the error indicates that the messages must be of odd length.

Here's how you can format the input correctly:
```python
import os

from llama_index.llms.huggingface import HuggingFaceInferenceAPI
from llama_index.core.prompts import ChatMessage, MessageRole

# Create a list of ChatMessage objects
messages = [
    ChatMessage(role=MessageRole.USER, content="What is the meaning of life?"),
    # Add more messages as needed
]

# Initialize the HuggingFaceInferenceAPI
llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    token=os.getenv("HF_ACCESS_TOKEN"),
)

# Call the chat method with the list of messages
response = llm.chat(messages=messages)
print(response.message.content)
```
Ensure that the `messages` variable is a list of `ChatMessage` objects, each with a `role` and `content`. This should resolve the `NotImplementedError` by providing the expected input format. Additionally, make sure that the sequence of messages is of odd length, as required by the `chat_messages_to_conversational_kwargs` function [1][2].
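The odd-length requirement just reflects a strictly alternating conversation that starts and ends with a user turn. A quick pre-flight check you can run before calling `chat` (a hypothetical helper, not part of llama-index):

```python
def is_valid_conversation(roles):
    """True if roles alternate user/assistant, starting and ending with user.

    Such a sequence necessarily has odd length.
    """
    if len(roles) % 2 == 0:
        return False
    expected = ["user", "assistant"] * len(roles)
    return all(r == e for r, e in zip(roles, expected))

print(is_valid_conversation(["user"]))                       # True
print(is_valid_conversation(["user", "assistant", "user"]))  # True
print(is_valid_conversation(["user", "assistant"]))          # False (even length)
```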
To continue talking to Dosu, mention @dosu.
@logan-markewich Looking at the prompt format for Llama 3 provided by Meta (see the Llama 3 Model Card), it looks to me like it supports system prompts. Is the agent prompt something I can customize to make it work? Any insight to lead me in the right direction is appreciated.
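For reference, the Llama 3 chat template documented in Meta's Model Card does include a dedicated system header, so the model itself accepts system prompts. A sketch of what the assembled prompt looks like (the helper function is illustrative; in practice the chat template applies these tokens for you):

```python
def format_llama3_prompt(system, user):
    """Assemble a single-turn Llama 3 instruct prompt using the special
    tokens from Meta's Model Card: <|begin_of_text|>, header markers,
    and <|eot_id|> terminators, ending with an open assistant header."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("You are a helpful agent.", "What is 2+2?")
print("<|start_header_id|>system<|end_header_id|>" in prompt)  # True
```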
Bug Description
When trying to create a `ReActAgent.from_tools` using `HuggingFaceInferenceAPI` as the llm, `NotImplementedError` is raised. It looks to me like the base `chat` method expects a list of messages, but when used through agents it is called with a string.
Version
0.12
Steps to Reproduce
```python
hf_token = os.getenv('HF_ACCESS_TOKEN')
agent_llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    token=hf_token,
)
agent = ReActAgent.from_tools(
    [multiply_tool, add_tool, query_engine_tool],
    llm=agent_llm,
    verbose=True,
)
response = agent.chat("anything")
```
Relevant Logs/Tracebacks