🚀 The feature
The ability to use TorchServe with some form of framework-agnostic base handler and base server worker, without having torch installed.
Motivation, pitch
TorchServe has a unique set of features and is quite powerful: it offers the flexibility of executing Python code during inference and handles multiple Python versions out of the box.
This makes it superior to TF Serving, which requires traced models and is therefore unusable when my models contain custom layers that call Python code, and to NVIDIA Triton, which only works out of the box with Python 3.10 and must be compiled to support other Python versions.
It's already almost usable for this; it just needs some refactoring to better separate the framework-agnostic code from the torch-specific code.
Alternatives
Installing torch even when I don't need it, which takes ~600 MB in the Docker image.
Mocking torch with a script along these lines:

```python
class Tensor:
    pass

def save(val, buff):
    print("I am dummy! I am not real torch!")
```
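To make such a mock actually intercept `import torch` in handler code, one option is to register it in `sys.modules` before anything imports the real package. This is only a minimal sketch of that workaround; the stubbed attributes (`Tensor`, `save`) are illustrative and would need to cover whatever the handler actually touches.

```python
import sys
import types

# Build a stub module object that will stand in for the real torch package.
mock_torch = types.ModuleType("torch")

class Tensor:  # placeholder standing in for torch.Tensor
    pass

def save(val, buff):
    # dummy replacement for torch.save
    print("I am dummy! I am not real torch!")

mock_torch.Tensor = Tensor
mock_torch.save = save

# Must run before any code does `import torch` for real.
sys.modules["torch"] = mock_torch

import torch  # now resolves to the stub above
assert torch.Tensor is Tensor
```

The obvious downside is that the stub has to be kept in sync with every torch symbol the serving code path touches, which is exactly why a proper split of framework-agnostic code would be preferable.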
Additional context
I spent quite some time investigating this topic and comparing these serving frameworks; you can see my talk about it.