pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

TorchServe inference stream response support #2234

Open lxning opened 1 year ago

lxning commented 1 year ago

🚀 The feature

TorchServe should support streaming responses for both HTTP and gRPC endpoints.
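
A minimal sketch of what a streaming-capable custom handler could look like. The helper `send_intermediate_predict_response`, its module path, and the `generate_stream` model method are assumptions made for illustration, not a confirmed TorchServe API at the time of this proposal.

```python
# Hypothetical handler that flushes partial results to the client as they are
# produced, instead of returning one final response at the end of inference.
from ts.protocol.otf_message_handler import send_intermediate_predict_response  # assumed helper
from ts.torch_handler.base_handler import BaseHandler


class StreamingHandler(BaseHandler):
    def inference(self, data, *args, **kwargs):
        partial_outputs = []
        # Assume the model can yield partial outputs (e.g. tokens from a
        # generative model) rather than a single final tensor.
        for chunk in self.model.generate_stream(data):  # hypothetical model API
            partial_outputs.append(chunk)
            # Push the intermediate result to the client immediately, so the
            # first bytes arrive long before the full prediction is finished.
            send_intermediate_predict_response(
                [chunk],
                self.context.request_ids,
                "Intermediate response",
                200,
                self.context,
            )
        # The returned value would form the final chunk of the stream.
        return ["".join(partial_outputs)]
```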

Motivation, pitch

Prediction latency for large-model inference is often high (e.g. 5 s). Some models can produce intermediate prediction results (e.g. generative AI models). This feature would send intermediate prediction results to the user as soon as they are ready, so the user gradually receives the entire response: for example, the first intermediate response might arrive within 1 s, with the complete result delivered by 5 s. The goal is to improve the user's prediction experience. A client-side sketch follows below.
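
On the client side, consuming such a stream over HTTP could look like the sketch below. The endpoint URL and model name are placeholders; this only illustrates reading a chunked response as it arrives.

```python
# Read intermediate predictions as they stream in, rather than waiting for the
# complete body. Model name and payload are illustrative placeholders.
import requests

with requests.post(
    "http://localhost:8080/predictions/my_streaming_model",
    data={"prompt": "hello"},
    stream=True,  # return as soon as headers arrive; iterate the body lazily
) as resp:
    for chunk in resp.iter_content(chunk_size=None):
        # Each chunk is an intermediate prediction; output starts appearing
        # within ~1 s instead of after the full ~5 s inference.
        print(chunk.decode("utf-8"), end="", flush=True)
```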

Alternatives

No response

Additional context

No response

Isalia20 commented 4 months ago

Is there a way to ping TorchServe with the request_id to check whether a request has completed? I don't have streaming inference, but I do have long-running requests (1 min+); it would be nice to have a way to check whether request processing has finished.