I am trying to run two models on a single machine using kaldi-gstreamer-server. How can I make the server bind each worker to a different model?
Here is how I start the two workers, one per model:
python kaldigstserver/worker.py -u wss://<IP>:8888/worker/ws/speech -c model1.yaml [ Model 1 ]
python kaldigstserver/worker.py -u wss://<IP>:8888/worker/ws/speech -c model2.yaml [ Model 2 ]
But how can the server identify which input should be sent to which worker?
Is there any way to tell the server code which worker belongs to which model?
OR
Is it possible to run two instances of Kaldi-Gstreamer-Server on two ports on a single machine?
I tried running master-server.py on two different ports and assigned one worker to each port,
but only one server-worker pair connects.
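For reference, the two-port approach is roughly what I am attempting. A sketch of the setup, assuming the stock master_server.py and worker.py scripts from the kaldi-gstreamer-server repo (ports, host, and YAML file names here are illustrative, not my actual values):

```shell
# Master server for model 1 on port 8888
python kaldigstserver/master_server.py --port=8888 &

# Master server for model 2 on port 8889
python kaldigstserver/master_server.py --port=8889 &

# Worker for model 1, pointed at the first server
python kaldigstserver/worker.py -u ws://localhost:8888/worker/ws/speech -c model1.yaml &

# Worker for model 2, pointed at the second server
python kaldigstserver/worker.py -u ws://localhost:8889/worker/ws/speech -c model2.yaml &
```

With this layout, a client would select the model simply by choosing which port to connect to. However, as described above, only one of the two server-worker pairs actually connects for me.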
Is there any solution to this?