triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

enhancement(client): Python type-hints #5238

Open aarnphm opened 1 year ago

aarnphm commented 1 year ago

It would be nice if the client for triton-inference-server supported type hints.

A nice addition would be to include generated type hints (protobuf stubs) for model_config.proto and grpc_service.proto. 😃
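To illustrate the request, here is a minimal sketch of what annotated client signatures could look like. The `InferenceClient` class and its methods below are hypothetical stand-ins for illustration only, not the actual `tritonclient` API:

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class InferResult:
    """Hypothetical result type; a real client returns richer objects."""
    model_name: str
    outputs: dict


class InferenceClient:
    """Illustrative, fully type-annotated client sketch (not tritonclient)."""

    def __init__(self, url: str, verbose: bool = False) -> None:
        self.url = url
        self.verbose = verbose

    def infer(
        self,
        model_name: str,
        inputs: Sequence[float],
        model_version: Optional[str] = None,
    ) -> InferResult:
        # Stub body: a real client would send the request over gRPC/HTTP.
        return InferResult(model_name=model_name, outputs={"data": list(inputs)})


client = InferenceClient("localhost:8001")
result = client.infer("resnet50", [1.0, 2.0])
```

With annotations like these, static checkers (mypy, pyright) and IDEs can catch wrong argument types and surface return types at call sites, which is the practical benefit being requested here.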

Tabrizian commented 1 year ago

cc @jbkyang-nvi

jbkyang-nvi commented 1 year ago

Hi @aarnphm, this looks like a simple enhancement on our end, since per https://github.com/protocolbuffers/protobuf/issues/2638 this is a supported feature. Added to our backlog.
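For context on the linked protobuf issue: `protoc` (protobuf >= 3.20) can emit `.pyi` stub files alongside the generated Python modules via its `--pyi_out` option. The excerpt below is a hypothetical illustration of what such a stub looks like; the field names are made up and do not reflect the real model_config.proto schema:

```python
# Illustrative shape of a generated .pyi stub (hypothetical fields,
# not the actual model_config.proto message definition).
class ModelConfig:
    name: str
    max_batch_size: int
    platform: str
```

Static type checkers read these annotation-only class bodies to type-check code that constructs and inspects protobuf messages, without affecting runtime behavior.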