flashlight / wav2letter

Facebook AI Research's Automatic Speech Recognition Toolkit
https://github.com/facebookresearch/wav2letter/wiki

Can I load a wav2letter model and use it to serve (infer) as a service? #839

Open phamvandan opened 4 years ago

phamvandan commented 4 years ago

In the context of service applications, we need to load the model once and then serve client requests. How can I do this?

tlikhomanenko commented 4 years ago

cc @avidov

tlikhomanenko commented 4 years ago

Probably this could help https://github.com/facebookresearch/wav2letter/wiki/Inference-Run-Examples#interactive-streaming-asr-example.

Or do you need to do this with w2l model, not inference model?

phamvandan commented 4 years ago

I think it only supports the streaming convnet model?

tlikhomanenko commented 4 years ago

You can simply write your own main.cpp where you load the model and then wait while communicating with some buffer from which you receive the data. You can have a look at the streaming example and the example I sent, go directly into the implementation, and adapt it to your use case.