alphacep / vosk-server

WebSocket, gRPC and WebRTC speech recognition server based on Vosk and Kaldi libraries
Apache License 2.0

question/idea: docker models injection #187

Closed · codemeister64 closed this 2 years ago

codemeister64 commented 2 years ago

Hi! I have a question: why do we use a separate Dockerfile for each language model?

I think we could mount the /opt/vosk-server/***/model folder, so the models can live outside the Docker container. We could also introduce an environment variable to specify a custom model name instead of the default one.

With this change the Docker images would be smaller, and there would be no need to rebuild the container whenever I want to try a different model (or my own).

What do you think? Or are there cases that the current approach covers that this wouldn't?
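Roughly what I have in mind (a sketch only; the container-side path and the VOSK_MODEL_PATH variable are hypothetical here, the server script would need to be taught to read it):

```
# Keep the model on the host and mount it at runtime instead of baking it
# into the image. VOSK_MODEL_PATH is a hypothetical variable the server
# would read, falling back to the current default "model" folder.
docker run -d -p 2700:2700 \
    -v /opt/models/my-custom-model:/opt/vosk-server/websocket/model \
    -e VOSK_MODEL_PATH=/opt/vosk-server/websocket/model \
    alphacep/kaldi-en:latest
```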

sskorol commented 2 years ago

Agreed, it's quite easy to do via volumes. I did it in my own GPU repo.

nshmyrev commented 2 years ago

You can bind the model folder with -v even now. I've updated the docs about it:

https://alphacephei.com/vosk/server#usage
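For reference, the bind mount looks roughly like this (a sketch; the container-side path depends on the image's Dockerfile, so check the one you actually run):

```
# Override the baked-in model with one from the host via a bind mount.
# /opt/vosk-model-en/model is assumed here for the English image;
# other language images may keep their model under a different path.
docker run -d -p 2700:2700 \
    -v /opt/models/my-custom-model:/opt/vosk-model-en/model \
    alphacep/kaldi-en:latest
```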