Closed Travis-Barton closed 2 months ago
My solution was to make my own parser:
```python
import json
import logging

from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser


def pydantic_parser(output, pydantic_object):
    # If no function was invoked, return to user
    parser_json = JsonOutputToolsParser()
    json_output = parser_json.parse_result(output)
    if json_output:
        return pydantic_object(**json_output[0]['args'])
    else:
        logging.warning("Could not parse output with JsonOutputToolsParser. Trying simple parser.")
        output = output[0].dict()
        try:
            return pydantic_object(**json.loads(output.get('text')))
        except json.JSONDecodeError:
            logging.warning(f"Could not parse output {output} with simple EVAL parser. Trying trimmed parser.")
            try:
                trimmed_result = output.get('text').split('}')[0] + '}'
                return pydantic_object(**json.loads(trimmed_result))
            except json.JSONDecodeError:
                logging.error(f"Could not parse output: {trimmed_result} with trimmed EVAL parser. Trying an LLM solution.")
                # If all else fails, try another LLM call to fix it
                try:
                    return simple_pydantic_parser(output.get('text'), pydantic_object)
                except Exception as e:
                    raise Exception(f"Could not parse output: {output.get('text')} with any parser. Error: {e}")
```
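The trimming step above (cut at the first `}` and retry) can be illustrated with stdlib-only code; `FactCheck` here is a hypothetical stand-in for the real Pydantic model:

```python
import json
from dataclasses import dataclass


@dataclass
class FactCheck:  # hypothetical stand-in for the real Pydantic model
    verdict: str


def trimmed_parse(text):
    """Try strict JSON first; on failure, cut at the first '}' and retry."""
    try:
        return FactCheck(**json.loads(text))
    except json.JSONDecodeError:
        trimmed = text.split('}')[0] + '}'
        return FactCheck(**json.loads(trimmed))


# Raw model text with trailing chatter after the JSON object still parses:
result = trimmed_parse('{"verdict": "true"} Sure, here is the answer!')
# result.verdict == "true"
```

Note the trim only rescues the case where valid JSON is followed by extra text; it cannot repair JSON that is malformed before the first `}`.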
```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI


def simple_pydantic_parser(output, pydantic_object):
    model = ChatOpenAI(temperature=0)
    parser = PydanticOutputParser(pydantic_object=pydantic_object)
    prompt = PromptTemplate(
        template="You are a reformat tool. Your job is to fix wrongly formatted documents using your tool. "
                 "Reformat this query into the proper format.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )
    chain = prompt | model | parser
    return chain.invoke({"query": output})
```
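For anyone unfamiliar with the `prompt | model | parser` line: it composes LangChain Runnables left to right. As a rough mental model only (this is a stdlib sketch, not LangChain's actual Runnable implementation), the composition behaves like nested function calls:

```python
import json


class Step:
    """Toy stand-in for a LangChain Runnable: holds a function, supports | composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # left | right applies left first, then right
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


prompt = Step(lambda d: f"Reformat: {d['query']}")  # stands in for PromptTemplate
model = Step(lambda s: '{"claim": "fixed"}')        # stands in for the chat model
parser = Step(lambda s: json.loads(s))              # stands in for PydanticOutputParser

chain = prompt | model | parser
chain.invoke({"query": "some malformed text"})  # -> {'claim': 'fixed'}
```

The real Runnable protocol adds batching, streaming, and async on top, but the data flow is the same left-to-right pipe.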
Checked other resources
Example Code
If I use my fact checker:
It returns an error about 50% of the time because the return from the ChatModel sometimes uses the tool and sometimes doesn't. I'm afraid I can't share my prompt, but it's a pretty simple system-and-user prompt that makes no mention of how the output should be structured.
Here are two examples of returns from the same code:
There is no difference between these two runs; I simply called `chain.invoke(...)` twice. Is there a way to force the ChatModel to use the bound tool?
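On the forcing question: the OpenAI chat completions API accepts a `tool_choice` field naming the function the model must call, and `langchain-openai`'s `bind_tools` exposes a `tool_choice` argument that maps to it (behavior may vary by installed version, so treat the LangChain line below as an untested assumption; `FactCheck` is a hypothetical tool name):

```python
def forced_tool_choice(tool_name):
    """Build the OpenAI tool_choice payload that forces a specific function call."""
    return {"type": "function", "function": {"name": tool_name}}


# With langchain-openai, this is roughly (untested assumption):
#   model = ChatOpenAI(temperature=0).bind_tools([FactCheck], tool_choice="FactCheck")
payload = forced_tool_choice("FactCheck")
```

With the call forced, the model should always return tool arguments rather than raw text, which would make the fallback branches in the parser above unnecessary.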
Error Message and Stack Trace (if applicable)
No response
Description
If I call the `invoke` function twice on a Pydantic-tool-bound ChatModel, it alternates between using the tool to return a JSON object and returning raw text.
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: