triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Pass a python dict to triton server python backend #5391

Open ukemamaster opened 1 year ago

ukemamaster commented 1 year ago

@Tabrizian In this example (line 131) they pass a Python dict in the `headers` arg of the `triton_client.infer()` call. Is there a way to access this dict in `model.py` when using the python_backend?

Tabrizian commented 1 year ago

Unfortunately, it is not possible right now. There is a similar request here: https://github.com/triton-inference-server/server/issues/3998
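Since request headers are not exposed to the python_backend, a workaround sometimes used (not suggested in this thread, just a sketch under assumptions) is to serialize the dict to JSON on the client and ship it as an extra BYTES input tensor, then decode it inside `model.py`'s `execute()`. The tensor name and dict contents below are illustrative, not part of the Triton API; only the serialization round-trip is shown, since the actual `tritonclient` / `pb_utils` plumbing depends on the model config:

```python
import json

# Hypothetical request metadata that would otherwise go in `headers`.
request_meta = {"user_id": "abc123", "trace": True}

# Client side: serialize the dict to UTF-8 JSON bytes. With tritonclient,
# this payload would be placed in a BYTES input tensor (e.g. one named
# "METADATA" -- an illustrative name, declared in config.pbtxt).
payload = json.dumps(request_meta).encode("utf-8")

# Server side (inside model.py's execute()): read the bytes back out of
# the tensor and decode them into a dict.
decoded = json.loads(payload.decode("utf-8"))
assert decoded == request_meta
```

This keeps the metadata inside the inference request itself, so it survives even though headers are stripped before the python_backend sees the request.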