Bec-k opened this issue 1 year ago (Open)
This would require BentoML's gRPC feature to support streaming, which it currently does not.
Streaming is now supported via SSE. gRPC streaming will require streaming support for gRPC on BentoML. I'm going to transfer this to BentoML for now, since SSE should be sufficient for most use cases.
Is any documentation available for that?
I guess this? https://docs.bentoml.org/en/latest/guides/streaming.html
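For a quick client-side picture: with SSE the server keeps the HTTP response open and emits `data:` lines as tokens are produced, so a plain streaming HTTP client is enough to consume it. The endpoint path and payload below are placeholders, not the actual BentoML API; the linked guide has the real schema:

```python
import httpx

# Placeholder endpoint and payload -- adjust to whatever your service exposes.
url = "http://localhost:3000/generate_stream"
payload = {"prompt": "Explain SSE in one sentence."}

with httpx.stream("POST", url, json=payload, timeout=None) as response:
    for line in response.iter_lines():
        # SSE frames the stream as "data: <chunk>" lines separated by blank lines.
        if line.startswith("data:"):
            chunk = line[len("data:"):].strip()
            print(chunk, end="", flush=True)
```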
Well, not really. There are a lot of AI pipelines running internally between servers, and they mostly communicate with each other via Kafka or gRPC streaming.
@aarnphm Is there any roadmap or plan to support gRPC streaming?
Hi @npuichigo @Bec-k - I would love to connect and hear more about your use case regarding gRPC streaming support, this could really help the team & community to prioritize supporting it. Could you drop me a DM in our community Slack?
Feature request
It would be nice to have a streaming feature for the generation API, so that the response streams token by token instead of waiting until the full response is generated. gRPC has built-in support for streaming responses, and proto code generation handles it as well. The only work required is on the server side, to pipe tokens into the stream, roughly as sketched below.
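A minimal sketch of what this could look like. The proto and all names (`llm.proto`, `GenerationService`, `generate_tokens`, etc.) are hypothetical, made up for illustration, and not part of BentoML or OpenLLM:

```python
# Hypothetical proto (llm.proto), compiled with grpc_tools.protoc:
#
#   syntax = "proto3";
#   service GenerationService {
#     rpc Generate (GenerateRequest) returns (stream GenerateToken);
#   }
#   message GenerateRequest { string prompt = 1; }
#   message GenerateToken   { string token = 1; }

from concurrent import futures
import grpc

# Modules generated by protoc from the hypothetical proto above.
import llm_pb2
import llm_pb2_grpc


class GenerationService(llm_pb2_grpc.GenerationServiceServicer):
    def Generate(self, request, context):
        # Pipe tokens into the stream as they are produced, instead of
        # buffering the full response. `generate_tokens` is a placeholder
        # for whatever model/runner actually yields tokens.
        for token in generate_tokens(request.prompt):
            yield llm_pb2.GenerateToken(token=token)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    llm_pb2_grpc.add_GenerationServiceServicer_to_server(GenerationService(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
```

On the client side, the generated stub returns an iterator, so consuming the stream is just `for tok in stub.Generate(request): ...`.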
Motivation
This feature would allow streaming the response while it is being generated, instead of waiting until it is fully generated.
Other
No response