langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

"Could not parse LLM output" when using QianfanChatEndpoint in agent. #15199

Closed ET-yzk closed 6 months ago

ET-yzk commented 9 months ago

System Info

langchain==0.0.352, qianfan==0.2.4

When I tried the agent usage shown in this video, I changed the model from gpt-3.5-turbo (ChatOpenAI) to ERNIE-Bot, and the agent produced the following error:

> Entering new AgentExecutor chain...
Could not parse LLM output: xxxxxxxxx
Observation: Invalid or incomplete response
Thought: Could not parse LLM output: xxxxx
Observation: Invalid or incomplete response
...

Also, ERNIE-Bot fails to call the llm-math tool correctly.

I wonder whether the problem is a capability limitation of the Qianfan model itself, a bug in the Qianfan integration code, or a mistake in my usage.
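For context on the error itself: a rough, self-contained sketch (illustrative only, not LangChain's actual parser) of what the CHAT_ZERO_SHOT_REACT_DESCRIPTION agent expects from the model. It instructs the model to reply with a fenced JSON blob containing "action" and "action_input"; a free-form answer produces exactly the "Could not parse LLM output" message seen above.

```python
import json
import re

FENCE = "`" * 3  # the markdown code fence the agent prompt asks for

# Illustrative stand-in for the agent's output parser: look for a
# fenced JSON blob; anything else raises, as in the real agent.
ACTION_RE = re.compile(FENCE + r"(?:json)?\s*(\{.*?\})\s*" + FENCE, re.DOTALL)

def parse_action(llm_output: str) -> dict:
    match = ACTION_RE.search(llm_output)
    if match is None:
        raise ValueError("Could not parse LLM output: " + llm_output)
    return json.loads(match.group(1))

good = ('Action:\n' + FENCE + 'json\n'
        '{"action": "Calculator", "action_input": "300*0.25"}\n' + FENCE)
print(parse_action(good))  # {'action': 'Calculator', 'action_input': '300*0.25'}

try:
    parse_action("75 is the answer")  # free-form reply, no JSON blob
except ValueError as err:
    print(err)
```

A model that answers conversationally instead of emitting the blob will fail this parse on every step, which matches the repeated errors in the trace.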

Who can help?

@danielhjz

Information

Related Components

Reproduction

my code

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import QianfanChatEndpoint

llm = QianfanChatEndpoint(
    temperature=0.000001,
    model='ERNIE-Bot'
)
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)
agent("300的1/4是多少?")  # "What is 1/4 of 300?"

code in the video

# code in the video
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    temperature=0
)
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)
agent("What is the 25% of 300?")

Expected behavior

Output when run with ChatOpenAI(temperature=0):

> Entering new AgentExecutor chain...
Thought: We need to calculate 25% of 300, which means we need to multiply 300 by 0.25.

Action:

{ "action": "Calculator", "action_input": "300*0.25" }


Observation: Answer: 75.0
Thought:The calculator tool returned the answer 75.0, which is correct.

Final Answer: 25% of 300 is 75.0.

> Finished chain.
{'input': 'What is the 25% of 300?', 'output': '25% of 300 is 75.0.'}
dosubot[bot] commented 9 months ago

🤖

Hello @ET-yzk! I'm Dosu, your friendly neighborhood bot. I'm here to help you hunt down bugs, answer your questions, and guide you towards becoming a contributor while we wait for a human maintainer. Let's get started, shall we?

Based on the information you provided and the context from the LangChain repository, it seems like you're encountering an issue with the QianfanChatEndpoint model when trying to use it with the ERNIE-Bot/ERNIE-Bot-turbo model.

The error message "Could not parse LLM output" suggests that the output from the language model is not in the expected format, which could be due to a variety of reasons. It could be a problem with the Qianfan model itself, or it could be an issue with the way the code is written or used.

I found a similar issue in the LangChain repository: Errors about ErnieBotChat using agent. The solution provided in this issue was to use the ErnieBotChat model instead of QianfanChatEndpoint. Here is the suggested solution:

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ErnieBotChat

model = ErnieBotChat(model="ERNIE-Bot")

tools = load_tools(["llm-math", "wikipedia"], llm=model)

agent = initialize_agent(
    tools,
    model,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True)

# Ask the question; the agent itself produces the thought/action steps,
# e.g. an action like {"action": "Calculator", "action_input": "0.25 * 300"}.
question = "300的25%是多少?"  # "What is 25% of 300?"

result = agent(question)
print(result)

This example follows the expected agent setup: at each step the agent emits a JSON blob with the action and action_input and follows the usual thought/action/observation sequence.

However, if you still want to use the QianfanChatEndpoint model, you may need to ensure that the output of ERNIE-Bot/ERNIE-Bot-turbo matches the format the agent's output parser expects. QianfanChatEndpoint is the LangChain wrapper around Baidu's Qianfan chat models: it handles sending and receiving messages and manages the interaction with the Qianfan API. You can find more details about QianfanChatEndpoint in the LangChain repository.
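As context for the looping trace in the original report: handle_parsing_errors=True does not fix the format mismatch; it feeds each parse failure back to the model as an observation. A rough, self-contained sketch of that control flow (illustrative only, not LangChain's implementation):

```python
def parse(output: str) -> str:
    # Stand-in for the agent's output parser: accept only something that
    # looks like an action; anything else raises, as in the real agent.
    if not output.startswith("Action:"):
        raise ValueError("Could not parse LLM output: " + output)
    return output

def run_agent(llm_step, max_iterations=3):
    transcript = []
    for _ in range(max_iterations):
        output = llm_step(transcript)
        try:
            return parse(output)  # a parsable action ends the loop
        except ValueError:
            # With handle_parsing_errors=True the failure becomes the
            # next observation instead of crashing the chain.
            transcript.append("Observation: Invalid or incomplete response")
    return transcript  # iteration limit hit: every step failed to parse

# A model that always answers in free-form text never escapes the loop:
print(run_agent(lambda transcript: "75 is the answer"))
```

This is why the trace alternates "Could not parse LLM output" with "Observation: Invalid or incomplete response" until the agent gives up.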

I hope this helps! If you have any other questions or need further clarification, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ET-yzk commented 9 months ago

BTW, I found the Qianfan official cookbook's usage of agents in bce-qianfan-sdk, and I have the following questions after trying it:

  1. Is this method currently mandatory for using the Qianfan model?

    agent = QianfanSingleActionAgent.from_system_prompt(tools, qianfan_chat_model)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
  2. Can langchain built-in tools be used directly like this?

    tools = load_tools(
        ["llm-math", "wikipedia"],
        llm=llm
    )

    If not, is it just that these tools' descriptions are in English and therefore don't work as well with the Qianfan model?

  3. Is this done through the functions parameter in the interface?

  4. Is QianfanChatEndpoint(model="ERNIE-Bot") connected to the internet by default? (When I tried the official docs, the weather results sometimes seemed to be real-time.)

Looking forward to your answers! @danielhjz
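On question 2: the descriptions of the built-in tools are English strings stored on each tool object, and they can be replaced after loading, before the agent is built. A minimal sketch for anyone who wants to test this (a stand-in Tool class is used so the sketch runs without LangChain installed; the Chinese descriptions are hypothetical examples):

```python
from dataclasses import dataclass

@dataclass
class Tool:  # stand-in for LangChain's Tool: name plus prompt description
    name: str
    description: str

# Roughly what load_tools(["llm-math", "wikipedia"], llm=llm) returns:
tools = [
    Tool("Calculator", "Useful for when you need to answer questions about math."),
    Tool("wikipedia", "A wrapper around Wikipedia for looking up articles."),
]

# Hypothetical Chinese descriptions to try with ERNIE-Bot:
zh_descriptions = {
    "Calculator": "用于进行数学计算",   # "for doing math calculations"
    "wikipedia": "用于查询百科知识",    # "for looking up encyclopedia knowledge"
}
for tool in tools:
    tool.description = zh_descriptions.get(tool.name, tool.description)

print(tools[0].description)  # 用于进行数学计算
```

Comparing agent behavior before and after the swap would show whether the English descriptions are the limiting factor.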