Closed ernOho closed 3 months ago
Seems to be a general issue.
Workaround?: https://k33g.hashnode.dev/prompts-and-chains-with-ollama-and-langchain
We can try it with the linked workaround. There is already a similar implementation in services/error_finders.py, in the SchemaChainWrapper class. Instead of returning a pydantic model, it returns a dict, so we would have to handle it differently in the evaluator class. Can you first try setting the fields in SummaryEvaluationItem to Optional and see what happens? @ernOho
After setting the fields to Optional, the chain.invoke(...) call in SummaryEvaluator returns all fields as None
(which was to be expected). Thoughts, @Bruno-val-bus?
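For context, the failure mode above can be detected with a simple guard. This is a minimal stdlib-only sketch: the field names (`score`, `reason`) are hypothetical stand-ins, and a dataclass is used in place of the real pydantic model, but the all-None check works the same way.

```python
from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical stand-in for SummaryEvaluationItem with Optional fields;
# the real class is a pydantic model, but the None-check logic is the same.
@dataclass
class SummaryEvaluationItem:
    score: Optional[int] = None
    reason: Optional[str] = None

def all_fields_none(item) -> bool:
    """Return True when the parser silently produced an empty result,
    i.e. every field fell back to its Optional default of None."""
    return all(getattr(item, f.name) is None for f in fields(item))

# Simulates what chain.invoke(...) returned after the Optional change
item = SummaryEvaluationItem()
print(all_fields_none(item))  # → True
```

A guard like this would let the evaluator fail loudly (or retry) instead of propagating empty evaluations downstream.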
I'll look into the workaround.
@Bruno-val-bus, problem solved, as you suggested. After checking the docs, StructuredOutputParser makes sense for less powerful models (e.g. llama3:8b), while the pydantic parser makes sense for more powerful ones.
We should investigate StructuredOutputParser vs. the pydantic parser across different llama3 model sizes (e.g. llama3:70b).
Or would it make sense to stick with the generic StructuredOutputParser, so we don't have to duplicate outputs, since our required outputs are not very complex?
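For reference, the reason StructuredOutputParser tolerates weaker models better is that it only asks the model to emit a fenced JSON block and then extracts it, rather than strictly validating against a pydantic schema. A stdlib-only sketch of that extraction step (the regex and the example field names are assumptions, not LangChain's exact code):

```python
import json
import re

def parse_structured_output(text: str) -> dict:
    """Extract the first fenced ```json block (or bare JSON) from an
    LLM response and return it as a plain dict, similar in spirit to
    LangChain's StructuredOutputParser. Raises ValueError on bad JSON."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    try:
        return json.loads(payload)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Could not parse model output as JSON: {exc}")

# Example: a typical small-model response wrapping JSON in a code fence
response = 'Here is the evaluation:\n```json\n{"score": 4, "reason": "concise"}\n```'
result = parse_structured_output(response)
# result is a plain dict, not a pydantic model, so the evaluator
# would need key access (result["score"]) instead of attribute access
print(result["score"])  # → 4
```

This also illustrates the trade-off raised above: the dict output skips type validation, which is exactly why the evaluator class would need to handle it differently than a pydantic model.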
Linked it to the relevant feature branch.
Currently not working.
Error message when running the local Ollama llama2 model: