Open iamthebot opened 7 months ago
Have you taken a look at pytriton? It might be helpful in your case. https://github.com/triton-inference-server/pytriton
@yinggeh I have. But this issue is for the server binary itself, e.g., if one wants to deploy Triton Inference Server without using the NVIDIA Docker images. Installation via conda would make that very easy.
@iamthebot Thanks. I will pass your feedback to the team.
DLIS-6303
Is your feature request related to a problem? Please describe. It would be handy to be able to install optimized builds of the inference server via conda (through either the nvidia or conda-forge channels).
Currently, only the Python client is published.
Happy to take a stab at this unless NVIDIA has concrete plans to work on this already?
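For illustration, the requested workflow might look something like the following. This is purely a sketch: no such conda package exists today, and the `tritonserver` package name and channel placement are assumptions, not anything NVIDIA has published.

```shell
# Hypothetical: install an optimized server build from the nvidia channel...
conda install -c nvidia tritonserver

# ...or, if accepted into conda-forge:
conda install -c conda-forge tritonserver

# Then launch the server directly, no Docker image required
# (--model-repository is an existing tritonserver CLI flag):
tritonserver --model-repository=/models
```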