Open franzwilding opened 5 months ago
Yes @franzwilding we have this item on our roadmap, thanks for raising this issue and voicing your preferred solution.
@vblagoje any idea yet when this feature will become available? We are using haystack in quite a few projects now and want to know whether it is worth putting more energy into our workaround solution, or whether we can expect proper streaming out of a pipeline soon :) ?
Yes, I understand totally! The support is currently being worked on 😎
@vblagoje Any updates regarding an ETA for the feature? Thanks in advance for the heads-up
@aymbot on our immediate roadmap for Q3, starting soon 🙏
With this feature implemented, hayhooks would be a strong alternative to langserve. Thanks again for working on it
We really need this feature. Is there any recent update? Streaming is very important because most other third-party UIs and packages call the backend in streaming mode.
In order to have a good LLM chat UX, we need to stream the response to the client. Langserve does this with a dedicated endpoint, and hayhooks could do the same (pseudocode):
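A possible shape for such an endpoint, sketched with the stdlib only. The `pipeline_run` callable, the `/pipelines/{name}/stream` route, and the `load_pipeline` helper in the comment are assumptions for illustration, not existing hayhooks API; in hayhooks the generator would be wrapped in a FastAPI `StreamingResponse`:

```python
import queue
import threading
from typing import Callable, Iterator


def stream_pipeline(pipeline_run: Callable[..., None], **inputs) -> Iterator[str]:
    """Run a pipeline in a background thread and yield chunks as they arrive.

    `pipeline_run` is a hypothetical stand-in for `pipe.run`; it is expected
    to accept a `streaming_callback` keyword and call it once per chunk.
    """
    buffer: queue.Queue = queue.Queue()
    done = object()  # sentinel marking the end of the pipeline run

    def worker() -> None:
        try:
            pipeline_run(streaming_callback=buffer.put, **inputs)
        finally:
            buffer.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while (chunk := buffer.get()) is not done:
        yield chunk


# In hayhooks this generator could back a dedicated endpoint, e.g. (FastAPI):
#   @app.post("/pipelines/{name}/stream")
#   def stream(name: str, body: dict):
#       pipe = load_pipeline(name)  # hypothetical helper
#       return StreamingResponse(stream_pipeline(pipe.run, **body),
#                                media_type="text/event-stream")
```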
Additionally, haystack should provide a special `streaming_callback` that writes the chunk content to a buffer that hayhooks can read from. Maybe the Pipeline could add this logic itself and provide a `pipe.stream` method that returns a generator, or something like this.
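What the suggested `pipe.stream` API could look like, as a toy sketch. This `Pipeline` class and its `generator_fn` argument are invented stand-ins, not the real `haystack.Pipeline`; the point is the internal `streaming_callback` feeding a buffer that the returned generator drains:

```python
import queue
import threading
from typing import Callable, Iterator


class Pipeline:
    """Toy model of the suggested API (not the real haystack Pipeline).

    `generator_fn` stands in for the LLM component: it receives a
    streaming_callback and calls it once per produced chunk.
    """

    def __init__(self, generator_fn: Callable[[Callable[[str], None]], None]):
        self._generator_fn = generator_fn

    def stream(self) -> Iterator[str]:
        """Run the pipeline and return a generator over response chunks."""
        buffer: queue.Queue = queue.Queue()
        done = object()  # sentinel marking the end of the run

        def worker() -> None:
            try:
                # The internal streaming_callback simply appends to the buffer.
                self._generator_fn(buffer.put)
            finally:
                buffer.put(done)

        threading.Thread(target=worker, daemon=True).start()
        while (chunk := buffer.get()) is not done:
            yield chunk
```

hayhooks could then iterate `pipe.stream()` directly while writing the HTTP response, instead of waiting for the full pipeline result.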