triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

triton server python backend how to support streaming transmission #7614

Open · opened 1 month ago by endingback

endingback commented 1 month ago

How can I support streaming text output when the input to a multimodal large model is an image? The model's generation code already supports streaming; how do I make Triton Server return the results as a stream?

GermanGebel commented 4 days ago

Hello! Maybe the vllm_backend can help you understand the streaming concept: https://github.com/triton-inference-server/vllm_backend/blob/main/src/model.py
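
For reference, streaming in the Python backend is done through decoupled mode: the model's `config.pbtxt` sets `model_transaction_policy { decoupled: true }`, and `execute()` pushes any number of responses through each request's response sender instead of returning them. Below is a minimal sketch of such a `model.py`; the tensor names `IMAGE` and `OUTPUT_TEXT` and the `generate_tokens` helper are hypothetical placeholders for your model's actual interface, not part of Triton's API.

```python
# Minimal sketch of a decoupled Python backend model (model.py).
# "IMAGE", "OUTPUT_TEXT", and generate_tokens() are placeholders;
# substitute your multimodal model's real tensors and generator.
# Requires in config.pbtxt:
#   model_transaction_policy { decoupled: true }
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Load your multimodal model here (placeholder).
        self.model = None

    def generate_tokens(self, image):
        # Placeholder for your model's streaming generator; it should
        # yield text chunks as they are produced.
        yield from ["hello", " ", "world"]

    def execute(self, requests):
        for request in requests:
            # In decoupled mode each request owns a response sender
            # that can emit any number of responses at any time.
            sender = request.get_response_sender()
            image = pb_utils.get_input_tensor_by_name(
                request, "IMAGE").as_numpy()

            for chunk in self.generate_tokens(image):
                out = pb_utils.Tensor(
                    "OUTPUT_TEXT",
                    np.array([chunk.encode("utf-8")], dtype=np.object_))
                sender.send(
                    pb_utils.InferenceResponse(output_tensors=[out]))

            # Signal that no more responses follow for this request.
            sender.send(
                flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)

        # Decoupled models return None; responses travel through the
        # response senders instead.
        return None
```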
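
On the client side, decoupled models must be called through the gRPC streaming API. A rough sketch, assuming the model above is deployed under the hypothetical name `multimodal_llm` (end-of-stream detection is crudely approximated with a queue timeout here; see the Triton decoupled-model client examples for a proper approach):

```python
# Hypothetical client for the sketch above, using tritonclient's
# gRPC streaming API. Names and shapes are placeholders.
from functools import partial
import queue

import numpy as np
import tritonclient.grpc as grpcclient


def callback(results, result, error):
    # Each streamed response (or error) arrives here as it is produced.
    results.put(error if error else result)


results = queue.Queue()
client = grpcclient.InferenceServerClient("localhost:8001")
client.start_stream(callback=partial(callback, results))

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy input
inp = grpcclient.InferInput("IMAGE", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
client.async_stream_infer(model_name="multimodal_llm", inputs=[inp])

while True:
    try:
        result = results.get(timeout=5.0)
    except queue.Empty:
        break  # crude end-of-stream detection for this sketch
    if isinstance(result, Exception):
        raise result
    chunk = result.as_numpy("OUTPUT_TEXT")
    print(chunk[0].decode("utf-8"), end="", flush=True)

client.stop_stream()
client.close()
```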