Closed dulvqingyunLT closed 3 years ago
Is your feature request related to a problem? Please describe. If we use other AI accelerators rather than GPUs, is it possible to use TIS (Triton Inference Server) for inference serving?
It is possible to use the Python backend to run on TPUs or other accelerators: https://github.com/triton-inference-server/python_backend
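For reference, a minimal Python backend `model.py` follows the `TritonPythonModel` interface (`initialize`/`execute`/`finalize`). The sketch below is an assumption about how one might wrap a non-GPU accelerator: the tensor names `INPUT0`/`OUTPUT0` and the `run_on_accelerator` helper are hypothetical and must match your own `config.pbtxt` and SDK.

```python
import json

import numpy as np

try:
    # Only available inside the Triton server's Python backend runtime.
    import triton_python_backend_utils as pb_utils
except ImportError:
    pb_utils = None


def run_on_accelerator(batch: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a call into the accelerator's own SDK
    # (e.g. a TPU/NPU runtime); here it just doubles the input.
    return batch * 2.0


class TritonPythonModel:
    def initialize(self, args):
        # args["model_config"] is the model's config.pbtxt serialized as JSON.
        self.model_config = json.loads(args["model_config"])

    def execute(self, requests):
        responses = []
        for request in requests:
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            result = run_on_accelerator(in_tensor.as_numpy())
            out_tensor = pb_utils.Tensor("OUTPUT0", result.astype(np.float32))
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses

    def finalize(self):
        pass
```

Because the backend only hands you NumPy arrays, any device the accelerator vendor exposes through a Python SDK can be driven from `execute` this way.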
Closing due to inactivity.