Closed gontsharuk closed 1 week ago
Any updates on this? I have the same problem
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
When determining the output of an agent action or a task, CrewAI relies on the relevant output following the string "Final Answer:" in the complete LLM answer (implemented in `crewai.agents.parser`). Unfortunately, quite often (in roughly 30% of the cases in our application) a lot of the relevant information comes before "Final Answer:" (e.g. a complete Markdown document), and it is then only briefly referenced in the final paragraph of the LLM answer after "Final Answer:", e.g.:

In this example the value of `["output"]` in the `return_values` attribute of the corresponding `AgentFinish` object will be only the text after "Final Answer:":

Instead, it should contain the complete Markdown document (the Comprehensive User Research Report in the example above).
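As a workaround on the consumer side, one could post-process the raw LLM answer so that content before "Final Answer:" is not lost. The helper and the length heuristic below are hypothetical and not part of CrewAI's API; this is only a sketch of the idea:

```python
# Hypothetical post-processing helper (NOT part of CrewAI): recover the
# full deliverable when the text after "Final Answer:" is only a brief
# reference to content that precedes the marker.
FINAL_ANSWER_MARKER = "Final Answer:"

def extract_full_output(llm_answer: str) -> str:
    """Return the text after the marker, falling back to the text
    before it when that earlier text dominates the answer."""
    if FINAL_ANSWER_MARKER not in llm_answer:
        return llm_answer.strip()
    before, after = llm_answer.split(FINAL_ANSWER_MARKER, 1)
    before, after = before.strip(), after.strip()
    # Heuristic (tunable): if the bulk of the content precedes the
    # marker, the tail is likely just a short reference to it.
    if len(before) > 3 * len(after):
        return before
    return after
```

One would apply this to the complete LLM answer instead of reading `return_values["output"]` directly; the 3x threshold is an arbitrary assumption and would need tuning.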
It looks like this behavior is largely due to the way the LLM prompts are constructed from the snippets in `crewai/translations/en.json`. I modified those locally and was able to bring the rate of unsatisfactory LLM answers down to roughly 5-10%, but I believe those prompt snippets can be improved even further to ensure the correct structure in nearly 100% of cases.