Is there support for any sequence-to-sequence language model other than charRNN for TensorRT Inference Server? Preferably a model like NMT (Neural Machine Translation)?
I would also like a working version of an SSD plan file (see issue #12).
Can someone please provide a config.pbtxt for SSD and for NMT as well?
Providing a wide range of models is beyond the scope of the inference server. I suggest you look on ngc.nvidia.com as well as the various public model "zoos" for different models.
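For reference, a config.pbtxt for a TensorRT plan model such as SSD follows the standard model-configuration schema: a `platform` of `tensorrt_plan` plus the model's input and output tensors. A minimal sketch is below; the tensor names, batch size, and dimensions are assumptions and must match how your particular SSD engine was actually built, so check them against your plan file.

```protobuf
# Hypothetical config.pbtxt for an SSD TensorRT plan.
# Tensor names and dims are placeholders; inspect your engine for the real values.
name: "ssd"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "Input"            # assumed input binding name
    data_type: TYPE_FP32
    dims: [ 3, 300, 300 ]    # assumed CHW shape for SSD300
  }
]
output [
  {
    name: "NMS"              # assumed output binding name
    data_type: TYPE_FP32
    dims: [ 1, 100, 7 ]      # assumed detection output shape
  }
]
```

The file goes at the top of the model's directory in the model repository (e.g. `models/ssd/config.pbtxt`, with the plan file under a numbered version subdirectory).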