triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Support for SSD, NMT is missing from TensorRT Inference Server #376

Closed: DebashisGanguly closed this issue 5 years ago

DebashisGanguly commented 5 years ago

Is there support for any seq2seq language model other than charRNN in the TensorRT Inference Server? Preferably something like NMT (Neural Machine Translation)?

I would also like a working version of an SSD plan file (see issue #12). Can someone please provide a config.pbtxt for SSD and NMT as well?
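For context, a minimal config.pbtxt for an SSD model exported as a TensorRT plan might look roughly like the sketch below. The tensor names ("Input", "NMS") and shapes are assumptions for illustration; they must match whatever names and dimensions the plan file was actually built with:

```
# Hypothetical config.pbtxt for an SSD TensorRT plan.
# Tensor names and dims are assumptions; they must match the engine.
name: "ssd_trt"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "Input"            # assumed input tensor name
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 300, 300 ]    # SSD300-style input, assumed
  }
]
output [
  {
    name: "NMS"              # assumed output of an NMS plugin
    data_type: TYPE_FP32
    dims: [ 1, 100, 7 ]      # [1, keepTopK, 7] detection tuples, assumed
  }
]
```

With the default model repository layout, the engine itself would then live at models/ssd_trt/1/model.plan.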

deadeyegoodwin commented 5 years ago

Providing a wide range of models is beyond the scope of the inference server. I suggest you look on ngc.nvidia.com as well as the various public model "zoos" for different models.

aj7tesh commented 3 years ago

Is NMT supported now?