Open · langchain4j · opened 8 months ago
Hi,
The templates for parsing outputs were developed and tested using ChatGPT, weren't they? Open LLMs, such as Llama, are much more verbose in their answers.
Kind regards, and thank you for such a stunning project.
@ruizrube yes, most of the high-level features were developed with at least a gpt-3.5-turbo-level LLM in mind.
But now that we have llama3/mixtral/etc. and JSON mode in Ollama, it should work just as well for most use cases.
But we need to implement this PR anyway to ensure that a long tail of corner cases is handled well.
In case anyone is stuck with this, just enable the JSON mode of the LLM for the time being.
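For Ollama, a minimal sketch could look like the following. The base URL and model name are placeholders for a local Ollama instance, and builder options may differ slightly between langchain4j versions:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class JsonModeExample {

    public static void main(String[] args) {

        // Force the local model to emit valid JSON so that output parsing
        // has a predictable payload to work with.
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // assumed local Ollama endpoint
                .modelName("llama3")               // any JSON-capable local model
                .format("json")                    // enables Ollama's JSON mode
                .temperature(0.0)
                .build();

        String answer = model.generate(
                "Return a JSON object with fields 'name' and 'age' for: John, 42");

        System.out.println(answer); // e.g. {"name": "John", "age": 42}
    }
}
```

With JSON mode on, verbose preambles ("Sure, here is the JSON you asked for...") are suppressed, which is usually the part that breaks the parsing templates.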
logitBias (if supported by the LLM)
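For OpenAI-compatible models, a hedged sketch of what using logit bias might look like is below. The `logitBias` builder option, the token IDs, and the bias values are assumptions for illustration only; real token IDs depend on the model's tokenizer, and not every LLM exposes logit bias at all:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import java.util.Map;

public class LogitBiasExample {

    public static void main(String[] args) {

        // Hypothetical token IDs and bias values, purely for illustration.
        // Look up real token IDs with the tokenizer of the chosen model.
        Map<String, Integer> bias = Map.of(
                "9642", -100, // strongly suppress one token
                "90", 10      // mildly encourage another
        );

        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo")
                .logitBias(bias) // assumed builder option; only for LLMs that support logit bias
                .build();

        System.out.println(model.generate("Respond with a JSON object: {\"ok\": true}"));
    }
}
```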