phamvandan opened this issue 4 years ago
cc @avidov
Probably this could help: https://github.com/facebookresearch/wav2letter/wiki/Inference-Run-Examples#interactive-streaming-asr-example.
Or do you need to do this with the w2l model, not the inference model?
I think it only supports the streaming convnet model?
You can simply write your own main.cpp where you load the model once and then wait, communicating with some buffer from which you receive the data. I suggest you have a look at the streaming example and the example I sent, dig directly into the implementation, and adapt it to your use case.
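To make the pattern concrete, here is a minimal sketch of such a main.cpp. Note that `Model`, `loadModel()`, and `transcribe()` are hypothetical stand-ins for the actual wav2letter inference calls (see the streaming example for the real API); the part that matters is loading once at startup and then serving requests from a thread-safe queue:

```cpp
// Load-once / serve-many sketch. Model, loadModel() and transcribe()
// are hypothetical placeholders for the wav2letter inference API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Model {};  // placeholder for the loaded acoustic model + decoder

Model loadModel(const std::string& path) {
    // Hypothetical: in real code, deserialize the acoustic model,
    // language model and decoder here. Runs exactly once.
    std::cout << "loading model from " << path << "\n";
    return Model{};
}

std::string transcribe(const Model&, const std::vector<float>& audio) {
    // Hypothetical: run the inference pipeline on the audio buffer.
    return "transcript for " + std::to_string(audio.size()) + " samples";
}

int main() {
    // 1. Load the model a single time at startup.
    const Model model = loadModel("acoustic_model.bin");

    // 2. Thread-safe request queue, filled by whatever front end you
    //    choose (socket server, gRPC handler, ...).
    std::queue<std::vector<float>> requests;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Worker: hangs on the queue and serves each request with the
    // already-loaded model.
    std::thread worker([&] {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return done || !requests.empty(); });
            if (requests.empty()) return;  // shut down once drained
            auto audio = std::move(requests.front());
            requests.pop();
            lock.unlock();
            std::cout << transcribe(model, audio) << "\n";
        }
    });

    // Demo front end: enqueue a few fake "client requests".
    for (int i = 1; i <= 3; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);
            requests.push(std::vector<float>(i * 16000, 0.0f));
        }
        cv.notify_one();
    }

    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
    worker.join();
}
```

In a real service the demo loop would be replaced by a socket or RPC listener that pushes incoming audio into the queue, but the model-loading cost is paid only once no matter how many requests arrive.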
In the context of service applications, we need to load the model once and then serve client requests. How can I do this?