Open rpm-arch opened 3 weeks ago
I don't think we can provide support for this at a general level. Does the same snippet of code work with llama3 but fail with a loop for llama3.1? Do you have a full trace of the outputs? What other things have you tried or changed?
I am attempting to use Llama3.1 with the following ReAct agent template from Langchain:
https://python.langchain.com/v0.1/docs/modules/agents/agent_types/react/
Here is the code:
This results in the agent repeatedly invoking the tool in a loop and finally failing with an output parsing error:
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output:
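For context, this error means the model's completion did not match the ReAct output contract the agent expects. Below is a minimal sketch of that contract (this is not LangChain's actual parser, just an illustration): each completion must contain either an `Action:`/`Action Input:` pair or a `Final Answer:` line, and anything else raises the same kind of "Could not parse LLM output" error:

```python
import re

# Rough sketch of the ReAct output format a react agent expects.
ACTION_RE = re.compile(
    r"Action\s*:\s*(?P<action>.*?)\nAction\s*Input\s*:\s*(?P<input>.*)", re.DOTALL
)
FINAL_RE = re.compile(r"Final Answer\s*:\s*(?P<answer>.*)", re.DOTALL)

def parse_react_output(text: str):
    """Return ("final", answer) or ("action", tool, tool_input); raise otherwise."""
    final = FINAL_RE.search(text)
    if final:
        return ("final", final.group("answer").strip())
    action = ACTION_RE.search(text)
    if action:
        return ("action", action.group("action").strip(), action.group("input").strip())
    # This is the situation behind the error above: the model produced
    # free-form text instead of the expected keywords, so the loop retries
    # (or aborts with ValueError when handle_parsing_errors is not set).
    raise ValueError(f"Could not parse LLM output: {text!r}")

print(parse_react_output("Thought: I now know the answer\nFinal Answer: 42"))
print(parse_react_output("Thought: look it up\nAction: search\nAction Input: llama3.1"))
```

If llama3.1 keeps emitting prose without these keywords, the executor will loop until it hits its iteration limit, which matches the behavior described here.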
With handle_parsing_errors=True:
The same code works if I use other models such as mistral-nemo or qwen2.
Should this work, or can you share an example ReAct prompt that Llama3.1 is able to use?
Here is the standard prompt I am trying:
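The prompt text itself did not come through above. For reference, the standard ReAct prompt published on the LangChain hub as `hwchase17/react` (which the linked docs pull) reads as follows; this is assumed to be the "standard prompt" meant:

```text
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question

Begin!

Question: {input}
Thought:{agent_scratchpad}
```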
Thanks