Closed WenJett closed 3 days ago
I think this is a problem with the small LLM itself. It can't be forced to call the tool, so it sometimes produces output that OllamaFunctions cannot parse. In those cases the LLM emits incorrectly structured content, which causes the parsing failure.

I minified your code down to just OllamaFunctions and deliberately entered some garbage input to reproduce this error as reliably as possible:
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(model="phi3", format="json")


class CompleteOrEscalate(BaseModel):
    """A tool to mark the current task as completed and/or to escalate control of the dialog to the main assistant,
    who can re-route the dialog based on the user's needs."""

    cancel: bool = True
    reason: str

    class Config:
        schema_extra = {
            "example": {
                "cancel": True,
                "reason": "User changed their mind about the current task.",
            },
            "example 2": {
                "cancel": True,
                "reason": "I have fully completed the task.",
            },
            "example 3": {
                "cancel": False,
                "reason": "I need to have additional information from user to search.",
            },
        }


model = model.bind_tools(tools=[CompleteOrEscalate])

for _ in range(10):
    # It doesn't always error; just run it until it does.
    print(model.invoke("sftgstrew5t6436fgvhbfdat"))
```
So I think this may not be directly related to LangGraph itself.
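Since the bad output is intermittent, one possible workaround (my own sketch, not part of OllamaFunctions or LangGraph) is to simply retry the call when parsing fails. The `flaky_invoke` stub below is a hypothetical stand-in for something like `lambda: model.invoke(prompt)`:

```python
# Sketch of a retry wrapper for intermittent parse failures.
# `invoke` is any zero-argument callable that raises ValueError on a
# bad response, e.g. a lambda wrapping model.invoke(prompt).


def invoke_with_retry(invoke, attempts=3):
    """Call `invoke` up to `attempts` times, re-raising the last ValueError."""
    last_error = None
    for _ in range(attempts):
        try:
            return invoke()
        except ValueError as exc:  # OllamaFunctions raises ValueError on parse failure
            last_error = exc
    raise last_error


# Demo with a stub that fails twice before succeeding.
calls = {"n": 0}


def flaky_invoke():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("Failed to parse a response")
    return {"cancel": True, "reason": "done"}


result = invoke_with_retry(flaky_invoke)
print(result)  # {'cancel': True, 'reason': 'done'}
```

This doesn't fix the underlying model behavior, of course; it just papers over failures that only happen some of the time.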
@gbaian10 it does seem like the LLM is the issue at the core, but I would assume Llama 3 70B to suffice for this kind of prompting. I did try instructing it in the prompt template to follow a certain output format, which I assume defaults to `content="" id=""` (not too sure on this). It is still quite reluctant to comply and produces different keys such as `name_of_event`, `class`, and `grade`, which were part of my query.
I attempted with another tool and it produced this error message:

```
ValueError: Failed to parse a response from phi3:14b-medium-4k-instruct-q8_0 output: { "name_of_event": "", "class": "", "grade": "" }
```

Even with Llama 3 70B, it produced a similar error message:

```
ValueError: Failed to parse a response from llama3:70b-instruct output: { "name": "conversational_ai" }
```
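For what it's worth, the failure is visible in the raw output quoted in those errors: to the best of my understanding the OllamaFunctions parser expects a JSON object with `tool` and `tool_input` keys, while the model invented its own ad-hoc keys. A small stdlib check (my own illustration, not library code) makes the mismatch obvious:

```python
import json

# Keys the OllamaFunctions parser looks for in the model's JSON reply
# (an assumption based on my reading of the experimental package).
EXPECTED_KEYS = {"tool", "tool_input"}


def looks_like_tool_call(raw: str) -> bool:
    """Return True if `raw` parses as JSON with the expected tool-call keys."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and EXPECTED_KEYS <= data.keys()


# The two outputs from the error messages above:
bad_phi3 = '{ "name_of_event": "", "class": "", "grade": "" }'
bad_llama = '{ "name": "conversational_ai" }'
# What a well-formed tool call might look like:
good = '{"tool": "CompleteOrEscalate", "tool_input": {"cancel": true, "reason": "done"}}'

print(looks_like_tool_call(bad_phi3))   # False
print(looks_like_tool_call(bad_llama))  # False
print(looks_like_tool_call(good))       # True
```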
Closing, since the issue is caused by the underlying LLM.
Description
I am trying to implement OllamaFunctions in the chatbot example from part 4 of the LangGraph documentation. However, I run into an issue: when a tool is called, it raises the error "Failed to parse a response from model", although the LLM has already retrieved the necessary information. The structure of the code and functions is all copied over from the example provided in the documentation, with the exception of using OllamaFunctions instead of the OpenAI model (which I have tried, and it works without issue).
I have also tried changing the Ollama model, e.g. to llama and phi3, but it still produces the same error.
EDIT: I forgot to mention that the issue typically arises when I direct the initial chat to route to the search_agent and use one of its tools (online_query() in this case), which produces the error. However, if I use the initial primary agent and its tool, there is no error and it outputs normally.
System Info
platform: mac
python version: 3.12.4