Closed: Shashika007 closed this issue 3 years ago.
Hi @Shashika007, thank you for the feedback - we are currently standardizing and automating the generation of the readmes describing model deployment on Triton. Due to this standardization, readmes such as the one for FastPitch may lack guidelines specific to the model. We recognize the need to provide more information about specific models to help you run the deployment and also understand what happens in the process. Regarding the use of an audio generator, this is best done with an ensemble of the FastPitch and WaveGlow (or Mel-gan) models on the Triton server, such that the server outputs audio. We don't have a readme for setting up this ensemble, but you can refer to the Jasper readme for more information.
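For reference, a Triton model ensemble chaining FastPitch into WaveGlow can be declared in a `config.pbtxt` along these lines. This is a minimal sketch, not taken from the repo: the model names, tensor names (`TEXT_IDS`, `MEL`, `AUDIO`, `INPUT__0`, `OUTPUT__0`), and dimensions are illustrative assumptions that must match the actual exported models.

```protobuf
# Illustrative ensemble config (names and dims are assumptions).
name: "fastpitch_waveglow_ensemble"
platform: "ensemble"
max_batch_size: 1
input [ { name: "TEXT_IDS" data_type: TYPE_INT64 dims: [ -1 ] } ]
output [ { name: "AUDIO" data_type: TYPE_FP32 dims: [ -1 ] } ]
ensemble_scheduling {
  step [
    {
      # FastPitch: encoded text -> mel-spectrogram
      model_name: "fastpitch"
      model_version: -1
      input_map { key: "INPUT__0" value: "TEXT_IDS" }
      output_map { key: "OUTPUT__0" value: "MEL" }
    },
    {
      # WaveGlow: mel-spectrogram -> audio waveform
      model_name: "waveglow"
      model_version: -1
      input_map { key: "INPUT__0" value: "MEL" }
      output_map { key: "OUTPUT__0" value: "AUDIO" }
    }
  ]
}
```

With this in place, a client sends only the text input and receives audio back, as the server routes the intermediate mel-spectrogram between the two models internally.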
Hi @GrzegorzKarchNV, thank you for your reply. I will follow up if any issue occurs once I try the approach you mentioned.
Hi @Shashika007, were you able to generate a mel-spectrogram through the Triton client?
Related to Fastpitch/Triton client feature implementation
I have followed your FastPitch Triton deployment on a T4 and it works without any issue. I successfully ran the online and offline tests, but I would like to know the details of how to run inference through the Triton client.
Examples:
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechRecognition/Jasper/triton This example shows how to obtain a text transcription by passing audio through the Triton client.
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2/notebooks/triton Here, the Jupyter notebook explains how to generate an audio file from given text using audio generators, via the Triton client.
Is your feature request related to a problem? Please describe. I am trying to run inference and output audio through the Triton server, but the way to do that through a client, and how to use the audio generators, is not clearly described in your examples.
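For context, client-side inference against a FastPitch model on Triton can be sketched as below. This is a hedged sketch, not the repo's actual client: the model name `fastpitch`, the tensor names `INPUT__0`/`OUTPUT__0`, the toy character encoder, and the server address are all assumptions that must be adapted to the deployed model's `config.pbtxt`.

```python
import numpy as np

# Toy character-to-ID encoder (assumption): real deployments must use the
# model's own text-processing pipeline, not this simplified mapping.
def encode_text(text, symbols="abcdefghijklmnopqrstuvwxyz ,.!?"):
    ids = [symbols.index(c) for c in text.lower() if c in symbols]
    return np.array([ids], dtype=np.int64)  # shape: (batch=1, seq_len)

def infer_mel(text, url="localhost:8000", model="fastpitch"):
    """Send encoded text to Triton and return the mel-spectrogram output."""
    # Imported lazily so encode_text stays usable without a running server.
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url=url)
    seq = encode_text(text)
    inp = httpclient.InferInput("INPUT__0", list(seq.shape), "INT64")
    inp.set_data_from_numpy(seq)
    result = client.infer(model_name=model, inputs=[inp])
    return result.as_numpy("OUTPUT__0")  # output tensor name is an assumption
```

If the server exposes a FastPitch+WaveGlow ensemble instead, the same call pattern applies with the ensemble's model name, and the returned tensor would be the audio waveform rather than a mel-spectrogram.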
Describe the solution you'd like I would like an example or guide covering the issues I have mentioned above. As a summary,
Thank you, Shashika