Closed TuanaCelik closed 1 year ago
+1
In addition, I'd find it helpful to:
We had this on our to-do list, but I agree it should be prioritized. Now that we are paying more attention to user experience, why don't we also implement streaming while we're at it? It's not super-hard and would further improve the impression of the agent's progress. Of course, it would work only for OpenAI models, but I think it is now possible even for HF models (we can do those later).
@tholor and @vblagoje - If we're aiming for Agents in 1.15, then at least as a first step, having this `result["transcript"].split('---')[1]` accessible as a value like `result["observation"]` or similar (I leave the naming to you) would make the tutorial explanation clean. (Or `result["transcript"]["observation"]`?) -- just a sidenote for something that could be implemented quicker.
This issue can be closed. It's implemented and shown in the tutorial: https://haystack.deepset.ai/tutorials/23_answering_multihop_questions_with_agents
@vblagoje and @julian-risch - I wanted to create this feature request after using the agent for a bit. As a user I've noticed that although `result["transcript"]` is useful, the most interesting and useful part for me to observe is the second part of the transcript, which (as far as I can tell) I can only get as follows: `result["transcript"].split('---')[1]`. It works, but I think it would be nice to come up with a name for this part of the transcript and let users access it directly.
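For concreteness, the workaround can be sketched like this. The `result` dict and transcript text below are made-up stand-ins for what an agent run returns, and `observation` is a hypothetical name for the proposed key:

```python
# Stand-in for the dict returned by an agent run; the transcript content
# here is invented for illustration.
result = {
    "transcript": (
        "Thought: I need to look this up with a search tool.\n"
        "---\n"
        "Observation: The answer is 42.\n"
    )
}

# Today, the interesting second part can only be reached by splitting
# the raw transcript manually on the '---' separator:
observation = result["transcript"].split("---")[1].strip()
print(observation)

# The feature request: expose this part under its own key, e.g.
# result["observation"] (the name is open for discussion).
```

Running this prints `Observation: The answer is 42.`, i.e. exactly the part of the transcript the request wants to make directly accessible.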