jcsilva / docker-kaldi-gstreamer-server

Dockerfile for kaldi-gstreamer-server.
BSD 2-Clause "Simplified" License

Failing worker #42

Closed: gospodima closed 5 years ago

gospodima commented 5 years ago

Hi, I'm trying to start the server inside the container, but the worker fails. This is what I have in worker.log:

libudev: udev_has_devtmpfs: name_to_handle_at on /dev: Operation not permitted
libdc1394 error: Failed to initialize libdc1394
   DEBUG 2018-10-17 15:07:51,451 Starting up worker
2018-10-17 15:07:51 -    INFO:    decoder: Creating decoder using conf: {'timeout-decoder': 10, 'post-processor': "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\\1./;'", 'logging': {'version': 1, 'root': {'level': 'DEBUG', 'handlers': ['console']}, 'formatters': {'simpleFormater': {'datefmt': '%Y-%m-%d %H:%M:%S', 'format': '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'}}, 'disable_existing_loggers': False, 'handlers': {'console': {'formatter': 'simpleFormater', 'class': 'logging.StreamHandler', 'level': 'DEBUG'}}}, 'decoder': {'word-syms': '/opt/kaldi_models/en/yes_no/words.txt', 'model': '/opt/kaldi_models/en/yes_no/final.mdl', 'fst': '/opt/kaldi_models/en/yes_no/HCLG.fst'}, 'silence-timeout': 60, 'out-dir': '/tmp', 'use-vad': False}
2018-10-17 15:07:51 -    INFO:    decoder: Setting decoder property: word-syms = /opt/kaldi_models/en/yes_no/words.txt
2018-10-17 15:07:51 -    INFO:    decoder: Setting decoder property: model = /opt/kaldi_models/en/yes_no/final.mdl
2018-10-17 15:07:51 -    INFO:    decoder: Setting decoder property: fst = /opt/kaldi_models/en/yes_no/HCLG.fst
2018-10-17 15:07:51 -    INFO:    decoder: Created GStreamer elements
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstAppSrc object at 0x7f4b89f0f320 (GstAppSrc at 0x1fb07b0)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstDecodeBin object at 0x7f4b89f0f2d0 (GstDecodeBin at 0x2022060)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstAudioConvert object at 0x7f4b89f0f4b0 (GstAudioConvert at 0x202b8f0)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstAudioResample object at 0x7f4b89f0f410 (GstAudioResample at 0x1e1af70)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstTee object at 0x7f4b89f0f460 (GstTee at 0x2039000)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstQueue object at 0x7f4b89f0f550 (GstQueue at 0x203c170)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstFileSink object at 0x7f4b89f0f5a0 (GstFileSink at 0x2040400)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstQueue object at 0x7f4b89f0f5f0 (GstQueue at 0x203c460)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstCutter object at 0x7f4b89f0f640 (GstCutter at 0x2046010)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstOnlineGmmDecodeFaster object at 0x7f4b89f0f690 (GstOnlineGmmDecodeFaster at 0x2057000)> to the pipeline
2018-10-17 15:07:51 -   DEBUG:    decoder: Adding <__main__.GstFakeSink object at 0x7f4b89f0f6e0 (GstFakeSink at 0x1e7ea10)> to the pipeline
2018-10-17 15:07:51 -    INFO:    decoder: Linking GStreamer elements
2018-10-17 15:07:51 -    INFO:    decoder: Setting pipeline to READY
ERROR ([5.4.176~1-be967]:Input():kaldi-io.cc:756) Error opening input stream /opt/kaldi_models/en/yes_no/final.mdl

[ Stack-Trace: ]
kaldi::MessageLogger::HandleMessage(kaldi::LogMessageEnvelope const&, char const*)
kaldi::MessageLogger::~MessageLogger()
kaldi::Input::Input(std::string const&, bool*)

gst_element_change_state

gst_element_change_state

ffi_call_unix64
.
.
.
python() [0x4b988b]
PyEval_EvalFrameEx
PyEval_EvalFrameEx
PyEval_EvalCodeEx
python() [0x50160f]
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
__libc_start_main
python() [0x497b8b]

terminate called after throwing an instance of 'std::runtime_error'
  what():

I thought I had the same problem as in #4, but trying to run the server with the yes_no model from that issue gives the same error.
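For anyone hitting this later: the Kaldi error "Error opening input stream /opt/kaldi_models/en/yes_no/final.mdl" means the decoder could not open the model file at that path. A quick way to rule out a missing or unreadable file is to check the path from inside the running container; a minimal sketch, assuming a hypothetical container name of kaldi-server:

    # "kaldi-server" is only an example name; substitute your own container
    docker exec -it kaldi-server ls -l /opt/kaldi_models/en/yes_no/
    docker exec -it kaldi-server test -r /opt/kaldi_models/en/yes_no/final.mdl && echo readable || echo missing-or-unreadable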

jcsilva commented 5 years ago

Hi, I'm really busy these weeks and I don't have much time to help you now... It seems you are using a GMM monophone model. I've never tried this server with this type of model and I don't know if it is supported. Could you please try another acoustic model? It should work with a nnet3-based one.

I would suggest checking this file: https://github.com/jcsilva/docker-kaldi-gstreamer-server/blob/master/examples/practical-example/Dockerfile, especially lines 7 - 11, where the model is downloaded and configured.
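To illustrate the difference: the log above shows a GMM-style decoder config (only word-syms, model and fst), whereas a DNN-based model is configured for the kaldinnet2onlinedecoder GStreamer plugin by setting use-nnet2: True in the worker YAML. A rough sketch of such a config, loosely following the sample nnet2 YAML shipped with kaldi-gstreamer-server and using placeholder paths (the real files are the ones downloaded in the Dockerfile above):

    use-nnet2: True
    decoder:
        # properties passed to the kaldinnet2onlinedecoder GStreamer plugin
        use-threaded-decoder: true
        model: /opt/models/english/final.mdl             # placeholder path
        word-syms: /opt/models/english/words.txt         # placeholder path
        fst: /opt/models/english/HCLG.fst                # placeholder path
        mfcc-config: /opt/models/english/conf/mfcc.conf
        ivector-extraction-config: /opt/models/english/conf/ivector_extractor.conf
    out-dir: /tmp
    use-vad: False
    silence-timeout: 60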

Sorry for not being able to do more right now...

gospodima commented 5 years ago

I tried the Kaldi model from the practical-example and it works fine.

Thanks!