bentoml / BentoML

The easiest way to serve AI apps and models - Build reliable Inference APIs, LLM apps, Multi-model chains, RAG service, and much more!
https://bentoml.com
Apache License 2.0

feat: Response streaming over gRPC #4170

Open Bec-k opened 1 year ago

Bec-k commented 1 year ago

Feature request

It would be nice to have a streaming option for the generation API, so that the response is streamed token by token instead of waiting until the full response is generated. gRPC has built-in support for streaming responses, and the proto code generation handles it as well. The only work required is on your server side, to pipe tokens into the stream.
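For reference, this is roughly what server-side token streaming looks like with plain gRPC in Python. The `Generation` service, the `GenerateRequest`/`GenerateReply` messages, and `run_model` are made-up names for illustration, not anything BentoML provides:

```python
# Hypothetical proto, compiled with grpcio-tools:
#   service Generation {
#     rpc Generate (GenerateRequest) returns (stream GenerateReply);
#   }
from concurrent import futures

import grpc

import generation_pb2        # hypothetical generated module
import generation_pb2_grpc   # hypothetical generated module


class GenerationServicer(generation_pb2_grpc.GenerationServicer):
    def Generate(self, request, context):
        # Because the RPC declares a `stream` response, yielding from the
        # handler sends each message to the client as soon as it is ready.
        for token in run_model(request.prompt):  # run_model: placeholder for the LLM
            yield generation_pb2.GenerateReply(token=token)


server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
generation_pb2_grpc.add_GenerationServicer_to_server(GenerationServicer(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```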

Motivation

This feature would allow streaming the response while it is being generated, instead of waiting until it is fully generated.

Other

No response

aarnphm commented 1 year ago

This would require BentoML's gRPC feature to support streaming, which it currently does not.

aarnphm commented 1 year ago

Streaming is now supported via SSE. gRPC streaming will require streaming support for gRPC on BentoML. I'm going to transfer this to BentoML for now, since SSE should be sufficient for most use cases.
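As a rough sketch of what a streaming endpoint looks like: declaring the API as an async generator makes BentoML stream the response chunk by chunk instead of buffering it. The exact decorators and signatures depend on your BentoML version (see the streaming guide), and `fake_llm_stream` is just a stand-in for a real model call:

```python
from __future__ import annotations

from typing import AsyncGenerator

import bentoml


async def fake_llm_stream(prompt: str) -> AsyncGenerator[str, None]:
    # Stand-in for a real token stream coming from an LLM backend/runner.
    for token in ("streamed ", "token ", "by ", "token"):
        yield token


@bentoml.service
class Generator:
    @bentoml.api
    async def generate(self, prompt: str) -> AsyncGenerator[str, None]:
        # Returning an async generator tells BentoML to stream the response
        # to the client (served as SSE over HTTP) instead of returning it at once.
        async for token in fake_llm_stream(prompt):
            yield token
```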

Bec-k commented 1 year ago

Any documentation is available for that?

Bec-k commented 1 year ago

I guess this? https://docs.bentoml.org/en/latest/guides/streaming.html

Bec-k commented 1 year ago

> Streaming is now supported via SSE. gRPC streaming will require streaming support for gRPC on BentoML. I'm going to transfer this to BentoML for now, since SSE should be sufficient for most use cases.

Well, not really. There are a lot of AI pipelines running internally between servers, and they mostly use Kafka or gRPC streaming to communicate with each other.
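For that server-to-server case, the consuming side is just an iterator over the gRPC stream; the `Generation` service and message names below are hypothetical, matching the earlier sketch:

```python
import grpc

import generation_pb2        # hypothetical generated module (same proto as above)
import generation_pb2_grpc


def relay_tokens(prompt: str):
    # One internal service consuming another service's token stream over gRPC.
    # Iterating over the call yields each message as soon as the upstream sends it.
    with grpc.insecure_channel("upstream-llm:50051") as channel:
        stub = generation_pb2_grpc.GenerationStub(channel)
        for reply in stub.Generate(generation_pb2.GenerateRequest(prompt=prompt)):
            yield reply.token
```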

npuichigo commented 1 year ago

@aarnphm Any roadmap or plan to support gRPC streaming?

parano commented 1 year ago

Hi @npuichigo @Bec-k - I would love to connect and hear more about your use case regarding gRPC streaming support; this could really help the team and community prioritize it. Could you drop me a DM in our community Slack?