Python library for instructing LLMs and reliably validating their structured (JSON) outputs with Ollama and Pydantic. -> Deterministic work with LLMs.
Add reasoning capabilities to ollama-instructor #3
Allow reasoning when `format = ""`

In this case the system prompt will (in any case) contain an instruction for the LLM to respond with the JSON in a code block that starts with ``` and ends with ```.
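As a rough illustration, such a reasoning-enabled system prompt could look like the sketch below; the wording and the constant name `REASONING_SYSTEM_PROMPT` are assumptions for this issue, not the final prompt used by ollama-instructor.

```python
# Hypothetical sketch of the reasoning system prompt (wording is an assumption,
# not the final text used by ollama-instructor).
REASONING_SYSTEM_PROMPT = (
    "You may reason about the task step by step in plain text.\n"
    "When you are done, return the final JSON object that matches the given "
    "JSON schema inside a single code block that starts with ``` and ends with ```."
)
```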
What’s needed?
- Two kinds of system prompts (old and new), plus an additional prompt for the case where the user provides their own system prompt but chooses `format = ""`
- A new method to extract the JSON from the response (code block); include it in the retry feature if the code block is missing (see the sketch after this list)
- Maybe an additional error guidance prompt that requests only the invalid parts of the response (if multiple BaseModels were provided; only possible when reasoning capabilities are active, i.e. `format = ""`)
- Provide the raw response within the response object (should already be the case)
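A minimal sketch of what the extraction step could look like, assuming Pydantic v2; the function name, the regex and the `ExtractionError` class are placeholders for this issue, not the library's actual API:

```python
import json
import re
from typing import Type

from pydantic import BaseModel, ValidationError


class ExtractionError(Exception):
    """Raised when no valid JSON code block is found; can trigger the retry feature."""


def extract_json_from_code_block(text: str, model: Type[BaseModel]) -> BaseModel:
    """Pull the JSON out of the first ``` ... ``` block in the raw LLM response
    and validate it against the given Pydantic BaseModel."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        # No code block at all -> re-prompt the LLM (retry feature).
        raise ExtractionError("No JSON code block found in the response.")
    try:
        return model.model_validate(json.loads(match.group(1)))
    except (json.JSONDecodeError, ValidationError) as err:
        # Broken JSON or schema mismatch -> candidate for a retry with an
        # error guidance prompt.
        raise ExtractionError(str(err)) from err
```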
Why?
- Enhance the quality of the response by allowing the LLM to reason
- Make a chat-like experience possible