Closed: isaacwasserman closed this issue 3 weeks ago
@ccurme Any updates?
Thanks @isaacwasserman for the detailed writeup.
From what I can tell this is just due to how `tool_choice` is specified. I've merged a fix in https://github.com/langchain-ai/langchain/pull/27202. Let me know if you continue to see issues.
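For context, a rough illustration of what specifying `tool_choice` explicitly looks like with `ChatLlamaCpp`, assuming its `bind_tools` accepts an OpenAI-style `tool_choice` dict; the model path and `Joke` schema are placeholders, not code from the linked PR:

```python
from langchain_community.chat_models import ChatLlamaCpp
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """A joke with a setup and a punchline."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatLlamaCpp(model_path="/path/to/model.gguf")  # placeholder path

# Force the model to call the Joke tool by naming it in tool_choice.
llm_with_tools = llm.bind_tools(
    [Joke],
    tool_choice={"type": "function", "function": {"name": "Joke"}},
)
ai_msg = llm_with_tools.invoke("Tell me a joke about birds")
print(ai_msg.tool_calls)
```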
Checked other resources
Example Code
This is code from this page of documentation, specifically the "structured output" section.
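A minimal sketch of the kind of structured-output example that page shows, assuming the ChatLlamaCpp integration docs; the model path and `Joke` schema are placeholders:

```python
from langchain_community.chat_models import ChatLlamaCpp
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """A joke with a setup and a punchline."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatLlamaCpp(
    model_path="/path/to/model.gguf",  # placeholder path
    temperature=0.1,
)

# with_structured_output wraps the model in a tool call plus an output parser.
structured_llm = llm.with_structured_output(Joke)
result = structured_llm.invoke("Tell me a joke about birds")
print(result)  # the issue: this prints None instead of a Joke instance
```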
Error Message and Stack Trace (if applicable)
N/A
Description
Instead of giving structured output, the code prints `None`. Running with `langchain.debug = True` reveals that the LLM outputs plain text rather than a function call, so the output parser is unable to produce any structured output. It's worth noting that (1) this code does work when the LLM is GPT-3.5 Turbo rather than Llama, and (2) structured output does work when llama.cpp is called directly, along the lines of the sketch below.
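A minimal sketch of that kind of direct llama-cpp-python call, assuming the `chatml-function-calling` chat format; the model path, tool schema, and prompt are illustrative placeholders rather than the exact original snippet:

```python
import llama_cpp

# Load the model with llama-cpp-python's function-calling chat handler.
llm = llama_cpp.Llama(
    model_path="/path/to/model.gguf",  # placeholder path
    chat_format="chatml-function-calling",
    n_ctx=2048,
    verbose=False,
)

# Force a call to a single "Joke" tool by passing both tools and tool_choice.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke about birds"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "Joke",
                "description": "A joke with a setup and a punchline.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "setup": {"type": "string"},
                        "punchline": {"type": "string"},
                    },
                    "required": ["setup", "punchline"],
                },
            },
        }
    ],
    tool_choice={"type": "function", "function": {"name": "Joke"}},
)

# The structured arguments come back as a tool call rather than plain text.
print(response["choices"][0]["message"]["tool_calls"][0]["function"]["arguments"])
```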
One potential problem is that the `chat_format="chatml-function-calling"` kwarg is not passed to the `llama_cpp.Llama()` instance; however, letting this argument reach llama-cpp results in an error regarding the `tool_choice` argument, which is also not passed.

Debug output:
System Info
System Information
Package Information
Optional packages not installed
Other Dependencies