Open chenyangMl opened 3 years ago
The HTTP query params are not available to a Python model. Can you describe what per-inference-request information you want to pass into the Python model?
Can we reopen this issue? I am also looking for this feature. Basically, I need to pass some configuration with each inference request to condition my model. I could create additional inputs, but that is a bit of overkill, and I would have to encode all my settings into a tensor.
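The workaround mentioned above — encoding per-request settings into an extra input tensor — can be sketched like this. The settings dict and the tensor name are hypothetical; in a real deployment the bytes would travel as an extra BYTES input declared in the model's `config.pbtxt`, and the server-side half would run inside the Python backend's `execute()`:

```python
import json

# Hypothetical per-request settings we want the model to condition on.
settings = {"temperature": 0.7, "mode": "fast"}

# Client side: serialize the settings to UTF-8 bytes and send them as one
# extra BYTES input tensor (e.g. an input named "CONFIG" -- name assumed)
# alongside the real model inputs.
payload = json.dumps(settings).encode("utf-8")

# Server side, inside the Python backend's execute(): read the tensor's
# bytes back into a dict and use it to condition the model.
decoded = json.loads(payload.decode("utf-8"))
assert decoded == settings
```

This keeps the settings inside the normal request body, which is the only per-request data the Python backend currently sees.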
Any updates on this?
Any news?
Description Thanks for this remarkable work. I deploy a model that takes a variable in addition to the input tensors, so I want to send this variable via query_params with each inference request.
But I cannot find a function for this in "triton_python_backend_utils". The Triton client sends query_params; how can I read the query_params on the Triton server?
Triton Information nvcr.io/nvidia/tritonserver:20.12-py3
To Reproduce
Expected behavior The Triton client passes query_params with an inference request, and the server-side Python model can read those query_params.
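For context on where the query params live: with the KServe-style HTTP/REST protocol that Triton serves, inference is a POST to `/v2/models/<name>/infer`, and query parameters (which `tritonclient.http`'s `infer()` accepts via its `query_params` argument) are appended to that URL rather than placed in the request body. A minimal sketch of the resulting URL shape, with a hypothetical model name and parameter:

```python
from urllib.parse import urlencode

# Hypothetical model name and query parameter.
model_name = "my_model"
query_params = {"trace": "on"}

# Query parameters ride on the URL, not in the request body; the Python
# backend only sees the decoded request tensors, which is why it cannot
# read them today.
url = f"/v2/models/{model_name}/infer?{urlencode(query_params)}"
print(url)  # → /v2/models/my_model/infer?trace=on
```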