alphacep / vosk-server

WebSocket, gRPC and WebRTC speech recognition server based on Vosk and Kaldi libraries
Apache License 2.0
934 stars · 249 forks

run wrong for GPU #196

Open · filwu8 opened this issue 2 years ago

filwu8 commented 2 years ago

When I mount the model vosk-model-cn-kaldi-multicn-0.15 as a volume, using the image alphacep/kaldi-vosk-server-gpu, this happens:



terminate called after throwing an instance of 'kaldi::KaldiFatalError'
  what():  kaldi::KaldiFatalError
WARNING ([5.5.1027~1-59386]:SelectGpuId():cu-device.cc:243) Not in compute-exclusive mode.  Suggestion: use 'nvidia-smi -c 3' to set compute exclusive mode
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:438) Selecting from 1 GPUs
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:453) cudaSetDevice(0): NVIDIA GeForce RTX 3090   free:23286M, used:1289M, total:24575M, free/total:0.947529
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:501) Device: 0, mem_ratio: 0.947529
LOG ([5.5.1027~1-59386]:SelectGpuId():cu-device.cc:382) Trying to select device: 0
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:511) Success selecting device 0 free mem ratio: 0.947529
LOG ([5.5.1027~1-59386]:FinalizeActiveGpu():cu-device.cc:338) The active GPU is [0]: NVIDIA GeForce RTX 3090    free:22780M, used:1795M, total:24575M, free/total:0.926939 version 8.6

Options:
  --acoustic-scale            : Scaling factor for acoustic log-likelihoods (caution: is a no-op if set in the program nnet3-compute (float, default = 0.1)
  --add-pitch                 : Append pitch features to raw MFCC/PLP/filterbank features [but not for iVector extraction] (bool, default = false)
  --aux-q-capacity            : Advanced - Capacity of the auxiliary queue : Maximum number of raw tokens that can be stored *before* pruning for each frame. Lower -> less memory usage, Higher -> More accurate. During the tokens generation, if we detect that we are getting close to saturating that capacity, we will reduce the beam dynamically (adaptive beam) to keep only the best tokens in the remaining space. If the aux queue is still too small, we will print an overflow warning, but prevent the overflow. The computation can safely continue, but the quality of the output may decrease. We strongly recommend keeping aux-q-capacity large (>400k), to avoid triggering the adaptive beam and/or the overflow (-1 = set to 3*main-q-capacity). (int, default = -1)
  --beam                      : Decoding beam. Larger->slower, more accurate. If aux-q-capacity is too small, we may decrease the beam dynamically to avoid overflow (adaptive beam, see aux-q-capacity parameter) (float, default = 15)
  --cmvn-config               : Configuration file for online cmvn features (e.g. conf/online_cmvn.conf). Controls features on nnet3 input (not ivector features). If not set, the OnlineCmvn is disabled. (string, default = "")
  --computation.debug         : If true, turn on debug for the neural net computation (very verbose!) Will be turned on regardless if --verbose >= 5 (bool, default = false)   
  --cuda-decoder-copy-threads : Advanced - Number of worker threads used in the decoder for the host to host copies. (int, default = 2)
  --cuda-worker-threads       : The total number of CPU threads launched to process CPU tasks. -1 = use std::hardware_concurrency(). (int, default = -1)
  --debug-computation         : If true, turn on debug for the actual computation (very verbose!) (bool, default = false)
  --delta                     : Tolerance used in determinization (float, default = 0.000976562)
  --determinize-lattice       : Determinize the lattice before output. (bool, default = true)
  --endpoint.rule1.max-relative-cost : This endpointing rule requires relative-cost of final-states to be <= this value (describes how good the probability of final-states is). (float, default = inf)
  --endpoint.rule1.min-trailing-silence : This endpointing rule requires duration of trailing silence(in seconds) to be >= this value. (float, default = 5)
  --endpoint.rule1.min-utterance-length : This endpointing rule requires utterance-length (in seconds) to be >= this value. (float, default = 0)
  --endpoint.rule1.must-contain-nonsilence : If true, for this endpointing rule to apply there must be nonsilence in the best-path traceback. (bool, default = false)
  --endpoint.rule2.max-relative-cost : This endpointing rule requires relative-cost of final-states to be <= this value (describes how good the probability of final-states is). (float, default = 2)
  --endpoint.rule2.min-trailing-silence : This endpointing rule requires duration of trailing silence(in seconds) to be >= this value. (float, default = 0.5)
  --endpoint.rule2.min-utterance-length : This endpointing rule requires utterance-length (in seconds) to be >= this value. (float, default = 0)
  --endpoint.rule2.must-contain-nonsilence : If true, for this endpointing rule to apply there must be nonsilence in the best-path traceback. (bool, default = true)
  --endpoint.rule3.max-relative-cost : This endpointing rule requires relative-cost of final-states to be <= this value (describes how good the probability of final-states is). (float, default = 8)
  --endpoint.rule3.min-trailing-silence : This endpointing rule requires duration of trailing silence(in seconds) to be >= this value. (float, default = 1)
  --endpoint.rule3.min-utterance-length : This endpointing rule requires utterance-length (in seconds) to be >= this value. (float, default = 0)
  --endpoint.rule3.must-contain-nonsilence : If true, for this endpointing rule to apply there must be nonsilence in the best-path traceback. (bool, default = true)
  --endpoint.rule4.max-relative-cost : This endpointing rule requires relative-cost of final-states to be <= this value (describes how good the probability of final-states is). (float, default = inf)
  --endpoint.rule4.min-trailing-silence : This endpointing rule requires duration of trailing silence(in seconds) to be >= this value. (float, default = 2)
  --endpoint.rule4.min-utterance-length : This endpointing rule requires utterance-length (in seconds) to be >= this value. (float, default = 0)
  --endpoint.rule4.must-contain-nonsilence : If true, for this endpointing rule to apply there must be nonsilence in the best-path traceback. (bool, default = true)
  --endpoint.rule5.max-relative-cost : This endpointing rule requires relative-cost of final-states to be <= this value (describes how good the probability of final-states is). (float, default = inf)
  --endpoint.rule5.min-trailing-silence : This endpointing rule requires duration of trailing silence(in seconds) to be >= this value. (float, default = 0)
  --endpoint.rule5.min-utterance-length : This endpointing rule requires utterance-length (in seconds) to be >= this value. (float, default = 20)
  --endpoint.rule5.must-contain-nonsilence : If true, for this endpointing rule to apply there must be nonsilence in the best-path traceback. (bool, default = false)
  --endpoint.silence-phones   : List of phones that are considered to be silence phones by the endpointing code. (string, default = "")
  --extra-left-context        : Number of frames of additional left-context to add on top of the neural net's inherent left context (may be useful in recurrent setups (int, default = 0)
  --extra-left-context-initial : If >= 0, overrides the --extra-left-context value at the start of an utterance. (int, default = -1)
  --extra-right-context       : Number of frames of additional right-context to add on top of the neural net's inherent right context (may be useful in recurrent setups (int, default = 0)
  --extra-right-context-final : If >= 0, overrides the --extra-right-context value at the end of an utterance. (int, default = -1)
  --fbank-config              : Configuration file for filterbank features (e.g. conf/fbank.conf) (string, default = "")
  --feature-type              : Base feature type [mfcc, plp, fbank] (string, default = "mfcc")
  --frame-subsampling-factor  : Required if the frame-rate of the output (e.g. in 'chain' models) is less than the frame-rate of the original alignment. (int, default = 1)    
  --frames-per-chunk          : Number of frames in each chunk that is separately evaluated by the neural net.  Measured before any subsampling, if the --frame-subsampling-factor options is used (i.e. counts input frames (int, default = 50)
  --global-cmvn-stats         : filename with global stats for OnlineCmvn for features on nnet3 input (not ivector features) (string, default = "")
  --gpu-feature-extract       : Use GPU feature extraction. (bool, default = true)
  --ivector-extraction-config : Configuration file for online iVector extraction, see class OnlineIvectorExtractionConfig in the code (string, default = "")
  --ivector-silence-weighting.max-state-duration : (RE weighting in iVector estimation for online decoding) Maximum allowed duration of a single transition-id; runs with durations longer than this will be weighted down to the silence-weight. (float, default = -1)
  --ivector-silence-weighting.silence-phones : (RE weighting in iVector estimation for online decoding) List of integer ids of silence phones, separated by colons (or commas).  Data that (according to the traceback of the decoder) corresponds to these phones will be downweighted by --silence-weight. (string, default = "")
  --ivector-silence-weighting.silence-weight : (RE weighting in iVector estimation for online decoding) Weighting factor for frames that the decoder trace-back identifies as silence; only relevant if the --silence-phones option is set. (float, default = 1)
  --lattice-beam              : The width of the lattice beam (float, default = 10)
  --main-q-capacity           : Advanced - Capacity of the main queue : Maximum number of tokens that can be stored *after* pruning for each frame. Lower -> less memory usage, Higher -> More accurate. Tokens stored in the main queue were already selected through a max-active pre-selection. It means that for each emitting/non-emitting iteration, we can add at most ~max-active tokens to the main queue. Typically only the emitting iteration creates a large number of tokens. Using main-q-capacity=k*max-active with k=4..10 should be safe. If main-q-capacity is too small, we will print a warning but prevent the overflow. The computation can safely continue, but the quality of the output may decrease (-1 = set to 4*max-active). (int, default = -1)
  --max-active                : At the end of each frame computation, we keep only its best max-active tokens. One token is the instantiation of a single arc. Typical values are within the 5k-10k range. (int, default = 10000)
  --max-batch-size            : The maximum execution batch size. Larger = better throughput, but slower latency. (int, default = 400)
  --max-mem                   : Maximum approximate memory usage in determinization (real usage might be many times this). (int, default = 50000000)
  --mfcc-config               : Configuration file for MFCC features (e.g. conf/mfcc.conf) (string, default = "")
  --minimize                  : If true, push and minimize after determinization. (bool, default = false)
  --ntokens-pre-allocated     : Advanced - Number of tokens pre-allocated in host buffers. If this size is exceeded the buffer will reallocate, reducing performance. (int, default = 1000000)
  --num-channels              : The number of parallel audio channels. This is the maximum number of parallel audio channels supported by the pipeline. This should be larger than max_batch_size. (int, default = 600)
  --online-pitch-config       : Configuration file for online pitch features, if --add-pitch=true (e.g. conf/online_pitch.conf) (string, default = "")
  --optimization.allocate-from-other : Instead of deleting a matrix of a given size and then allocating a matrix of the same size, allow re-use of that memory (bool, default = true)
  --optimization.allow-left-merge : Set to false to disable left-merging of variables in remove-assignments (obscure option) (bool, default = true)
  --optimization.allow-right-merge : Set to false to disable right-merging of variables in remove-assignments (obscure option) (bool, default = true)
  --optimization.backprop-in-place : Set to false to disable optimization that allows in-place backprop (bool, default = true)
  --optimization.consolidate-model-update : Set to false to disable optimization that consolidates the model-update phase of backprop (e.g. for recurrent architectures (bool, default = true)
  --optimization.convert-addition : Set to false to disable the optimization that converts Add commands into Copy commands wherever possible. (bool, default = true)
  --optimization.extend-matrices : This optimization can reduce memory requirements for TDNNs when applied together with --convert-addition=true (bool, default = true)        
  --optimization.initialize-undefined : Set to false to disable optimization that avoids redundant zeroing (bool, default = true)
  --optimization.max-deriv-time : You can set this to the maximum t value that you want derivatives to be computed at when updating the model.  This is an optimization that saves time in the backprop phase for recurrent frameworks (int, default = 2147483647)
  --optimization.max-deriv-time-relative : An alternative mechanism for setting the --max-deriv-time, suitable for situations where the length of the egs is variable.  If set, it is equivalent to setting the --max-deriv-time to this value plus the largest 't' value in any 'output' node of the computation request. (int, default = 2147483647)        
  --optimization.memory-compression-level : This is only relevant to training, not decoding.  Set this to 0,1,2; higher levels are more aggressive at reducing memory by compressing quantities needed for backprop, potentially at the expense of speed and the accuracy of derivatives.  0 means no compression at all; 1 means compression that shouldn't affect results at all. (int, default = 1)
  --optimization.min-deriv-time : You can set this to the minimum t value that you want derivatives to be computed at when updating the model.  This is an optimization that saves time in the backprop phase for recurrent frameworks (int, default = -2147483648)
  --optimization.move-sizing-commands : Set to false to disable optimization that moves matrix allocation and deallocation commands to conserve memory. (bool, default = true) 
  --optimization.optimize     : Set this to false to turn off all optimizations (bool, default = true)
  --optimization.optimize-row-ops : Set to false to disable certain optimizations that act on operations of type *Row*. (bool, default = true)
  --optimization.propagate-in-place : Set to false to disable optimization that allows in-place propagation (bool, default = true)
  --optimization.remove-assignments : Set to false to disable optimization that removes redundant assignments (bool, default = true)
  --optimization.snip-row-ops : Set this to false to disable an optimization that reduces the size of certain per-row operations (bool, default = true)
  --optimization.split-row-ops : Set to false to disable an optimization that may replace some operations of type kCopyRowsMulti or kAddRowsMulti with up to two simpler operations. (bool, default = true)
  --phone-determinize         : If true, do an initial pass of determinization on both phones and words (see also --word-determinize) (bool, default = true)
  --plp-config                : Configuration file for PLP features (e.g. conf/plp.conf) (string, default = "")
  --reset-on-endpoint         : Reset a decoder channel when endpoint detected. Do not close stream (bool, default = false)
  --word-determinize          : If true, do a second pass of determinization on words only (see also --phone-determinize) (bool, default = true)

Standard options:
  --config                    : Configuration file to read (this option may be repeated) (string, default = "")
  --help                      : Print out usage message (bool, default = false)
  --print-args                : Print the command line arguments (to stderr) (bool, default = true)
  --verbose                   : Verbose level (higher->more logging) (int, default = 0)

Command line was:
ERROR ([5.5.1027~1-59386]:ReadConfigFile():parse-options.cc:493) Invalid option --min-active=200 in config file model/conf/model.conf

[ Stack-Trace: ]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::MessageLogger::LogMessage() const+0x7fe) [0x7f370d6ac1de]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::MessageLogger::LogAndThrow::operator=(kaldi::MessageLogger const&)+0x2a) [0x7f370d1adaca]  
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::ParseOptions::ReadConfigFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x3eb) [0x7f370d6a0cdb]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(BatchModel::BatchModel()+0x3fe) [0x7f370d24d56e]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(vosk_batch_model_new+0x20) [0x7f370d24a120]
/usr/lib/x86_64-linux-gnu/libffi.so.7(+0x6ff5) [0x7f371067cff5]
/usr/lib/x86_64-linux-gnu/libffi.so.7(+0x640a) [0x7f371067c40a]
/usr/lib/python3/dist-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so(+0x1afd7) [0x7f37106a2fd7]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3(_PyEval_EvalFrameDefault+0x5cf4) [0x570674]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3() [0x59bf26]
python3(_PyObject_MakeTpCall+0x1ff) [0x5f3d7f]
python3(_PyEval_EvalFrameDefault+0x58e6) [0x570266]
python3() [0x5002d8]
/usr/lib/python3.8/lib-dynload/_asyncio.cpython-38-x86_64-linux-gnu.so(+0x7ef9) [0x7f3710f82ef9]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3() [0x5fef43]
python3() [0x5c4927]
python3(PyVectorcall_Call+0x18d) [0x5f2f5d]
python3(_PyEval_EvalFrameDefault+0x68ed) [0x57126d]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(_PyFunction_Vectorcall+0x393) [0x5f6a13]
python3(_PyEval_EvalFrameDefault+0x56b5) [0x570035]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(PyEval_EvalCode+0x27) [0x68d047]
python3() [0x67e351]
python3() [0x67e3cf]
python3() [0x67e471]
python3(PyRun_SimpleFileExFlags+0x197) [0x67e817]
python3(Py_RunMain+0x212) [0x6b6fe2]
python3(Py_BytesMain+0x2d) [0x6b736d]
/usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f37112fe0b3]
python3(_start+0x2e) [0x5fa5ce]

terminate called after throwing an instance of 'kaldi::KaldiFatalError'
  what():  kaldi::KaldiFatalError
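For reference, the kind of setup being described (the GPU image with the model mounted as a volume) can be sketched as a docker-compose fragment. The container-side mount point and the GPU reservation syntax below are assumptions based on common Docker usage, not taken from this thread; check the image's documentation for the path it actually loads the model from.

```yaml
# Sketch of a docker-compose service for the GPU server.
# /opt/vosk-model-en/model is an assumed mount point, not confirmed here.
services:
  vosk-gpu:
    image: alphacep/kaldi-vosk-server-gpu
    ports:
      - "2700:2700"
    volumes:
      - ./vosk-model-cn-kaldi-multicn-0.15:/opt/vosk-model-en/model
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```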
nshmyrev commented 2 years ago

Remove --min-active=200 from the config file model/conf/model.conf.
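If editing by hand is inconvenient, the fix can be scripted. This is a minimal sketch; the drop_option helper and the example path are illustrative, not part of Vosk or Kaldi:

```python
# Hypothetical helper: drop an unsupported option (here --min-active)
# from a Kaldi-style config file, one option per line.
from pathlib import Path

def drop_option(conf_path, option="--min-active"):
    """Rewrite conf_path in place, keeping every line that does not set `option`.

    Returns the number of lines removed.
    """
    path = Path(conf_path)
    lines = path.read_text().splitlines()
    kept = [ln for ln in lines if not ln.strip().startswith(option)]
    path.write_text("\n".join(kept) + "\n")
    return len(lines) - len(kept)

# Example (adjust the path to wherever the model volume is mounted):
# drop_option("model/conf/model.conf")
```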

nshmyrev commented 2 years ago

Also, use https://alphacephei.com/vosk/models/vosk-model-cn-0.22.zip; it is much more accurate.

filwu8 commented 2 years ago

When I remove --min-active=200 from the config file model/conf/model.conf and use https://alphacephei.com/vosk/models/vosk-model-cn-0.22.zip, it still fails:

WARNING ([5.5.1027~1-59386]:SelectGpuId():cu-device.cc:243) Not in compute-exclusive mode.  Suggestion: use 'nvidia-smi -c 3' to set compute exclusive mode
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:438) Selecting from 1 GPUs
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:453) cudaSetDevice(0): NVIDIA GeForce RTX 3090   free:23286M, used:1289M, total:24575M, free/total:0.947529
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:501) Device: 0, mem_ratio: 0.947529
LOG ([5.5.1027~1-59386]:SelectGpuId():cu-device.cc:382) Trying to select device: 0
LOG ([5.5.1027~1-59386]:SelectGpuIdAuto():cu-device.cc:511) Success selecting device 0 free mem ratio: 0.947529
LOG ([5.5.1027~1-59386]:FinalizeActiveGpu():cu-device.cc:338) The active GPU is [0]: NVIDIA GeForce RTX 3090    free:22780M, used:1795M, total:24575M, free/total:0.926939 version 8.6
LOG ([5.5.1027~1-59386]:RemoveOrphanNodes():nnet-nnet.cc:948) Removed 0 orphan nodes.
LOG ([5.5.1027~1-59386]:RemoveOrphanComponents():nnet-nnet.cc:847) Removing 0 orphan components.
LOG ([5.5.1027~1-59386]:BatchModel():batch_model.cc:52) Loading HCLG from model/graph/HCLG.fst
LOG ([5.5.1027~1-59386]:BatchModel():batch_model.cc:56) Loading words from model/graph/words.txt
LOG ([5.5.1027~1-59386]:BatchModel():batch_model.cc:64) Loading winfo model/graph/phones/word_boundary.int
ERROR ([5.5.1027~1-59386]:ReadConfigFile():parse-options.cc:462) Cannot open config file: model/conf/ivector.conf

[ Stack-Trace: ]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::MessageLogger::LogMessage() const+0x7fe) [0x7f19ff9021de]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::MessageLogger::LogAndThrow::operator=(kaldi::MessageLogger const&)+0x2a) [0x7f19ff403aca]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::ParseOptions::ReadConfigFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x6cd) [0x7f19ff8f6fbd]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(void kaldi::ReadConfigFromFile<kaldi::OnlineIvectorExtractionConfig>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, kaldi::OnlineIvectorExtractionConfig*)+0x239) [0x7f19ff4db0c9]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::OnlineNnet2FeaturePipelineInfo::OnlineNnet2FeaturePipelineInfo(kaldi::OnlineNnet2FeaturePipelineConfig const&)+0x79a) [0x7f19ff4f098a]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::cuda_decoder::BatchedThreadedNnet3CudaOnlinePipeline::ReadParametersFromModel()+0x44) [0x7f19ff4afd34]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::cuda_decoder::BatchedThreadedNnet3CudaOnlinePipeline::Initialize(fst::Fst<fst::ArcTpl<fst::TropicalWeightTpl<float> > > const&)+0x16) [0x7f19ff4b0586]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(kaldi::cuda_decoder::BatchedThreadedNnet3CudaOnlinePipeline::BatchedThreadedNnet3CudaOnlinePipeline(kaldi::cuda_decoder::BatchedThreadedNnet3CudaOnlinePipelineConfig const&, fst::Fst<fst::ArcTpl<fst::TropicalWeightTpl<float> > > const&, kaldi::nnet3::AmNnetSimple const&, kaldi::TransitionModel const&)+0x952) [0x7f19ff4a70e2]
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(BatchModel::BatchModel()+0x93f) [0x7f19ff4a3aaf]       
/usr/local/lib/python3.8/dist-packages/vosk-0.3.32-py3.8.egg/vosk/libvosk.so(vosk_batch_model_new+0x20) [0x7f19ff4a0120]
/usr/lib/x86_64-linux-gnu/libffi.so.7(+0x6ff5) [0x7f1a028d2ff5]
/usr/lib/x86_64-linux-gnu/libffi.so.7(+0x640a) [0x7f1a028d240a]
/usr/lib/python3/dist-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so(+0x1afd7) [0x7f1a028f8fd7]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3(_PyEval_EvalFrameDefault+0x5cf4) [0x570674]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3() [0x59bf26]
python3(_PyObject_MakeTpCall+0x1ff) [0x5f3d7f]
python3(_PyEval_EvalFrameDefault+0x58e6) [0x570266]
python3() [0x5002d8]
/usr/lib/python3.8/lib-dynload/_asyncio.cpython-38-x86_64-linux-gnu.so(+0x7ef9) [0x7f1a031d8ef9]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3() [0x5fef43]
python3() [0x5c4927]
python3(PyVectorcall_Call+0x18d) [0x5f2f5d]
python3(_PyEval_EvalFrameDefault+0x68ed) [0x57126d]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(_PyFunction_Vectorcall+0x393) [0x5f6a13]
python3(_PyEval_EvalFrameDefault+0x56b5) [0x570035]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(PyEval_EvalCode+0x27) [0x68d047]
python3() [0x67e351]
python3() [0x67e3cf]
python3() [0x67e471]
python3(PyRun_SimpleFileExFlags+0x197) [0x67e817]
python3(Py_RunMain+0x212) [0x6b6fe2]
python3(Py_BytesMain+0x2d) [0x6b736d]
/usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f1a035540b3]
python3(_start+0x2e) [0x5fa5ce]

terminate called after throwing an instance of 'kaldi::KaldiFatalError'
  what():  kaldi::KaldiFatalError
/usr/lib/x86_64-linux-gnu/libffi.so.7(+0x640a) [0x7ffbd761540a]
/usr/lib/python3/dist-packages/_cffi_backend.cpython-38-x86_64-linux-gnu.so(+0x1afd7) [0x7ffbd763bfd7]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3(_PyEval_EvalFrameDefault+0x5cf4) [0x570674]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3() [0x59bf26]
python3(_PyObject_MakeTpCall+0x1ff) [0x5f3d7f]
python3(_PyEval_EvalFrameDefault+0x58e6) [0x570266]
python3() [0x5002d8]
/usr/lib/python3.8/lib-dynload/_asyncio.cpython-38-x86_64-linux-gnu.so(+0x7ef9) [0x7ffbd7f1bef9]
python3(_PyObject_MakeTpCall+0x29e) [0x5f3e1e]
python3() [0x5fef43]
python3() [0x5c4927]
python3(PyVectorcall_Call+0x18d) [0x5f2f5d]
python3(_PyEval_EvalFrameDefault+0x68ed) [0x57126d]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyFunction_Vectorcall+0x1b6) [0x5f6836]
python3(_PyEval_EvalFrameDefault+0x85a) [0x56b1da]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(_PyFunction_Vectorcall+0x393) [0x5f6a13]
python3(_PyEval_EvalFrameDefault+0x56b5) [0x570035]
python3(_PyEval_EvalCodeWithName+0x26a) [0x56939a]
python3(PyEval_EvalCode+0x27) [0x68d047]
python3() [0x67e351]
python3() [0x67e3cf]
python3() [0x67e471]
python3(PyRun_SimpleFileExFlags+0x197) [0x67e817]
python3(Py_RunMain+0x212) [0x6b6fe2]
python3(Py_BytesMain+0x2d) [0x6b736d]
/usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7ffbd82970b3]
python3(_start+0x2e) [0x5fa5ce]

... (the crash/restart cycle above repeats verbatim as the server is relaunched; only the pointer addresses differ)
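
The repeated ERROR line is the actual failure: the GPU batch pipeline tries to read an i-vector extractor config at `model/conf/ivector.conf`, and the mounted vosk-model-cn-kaldi-multicn-0.15 directory evidently does not contain one. A minimal sketch to check a mounted model directory before starting the server — the file paths are taken from the log above, and `MODEL_DIR` is an example, so point it at your own volume mount:

```shell
#!/bin/sh
# Check whether a Vosk model directory has the files the GPU batch
# server tries to load (paths taken from the log above).
# MODEL_DIR is an assumed example path; point it at your mounted model.
MODEL_DIR="${MODEL_DIR:-model}"

for f in graph/HCLG.fst graph/words.txt conf/ivector.conf; do
    if [ -f "$MODEL_DIR/$f" ]; then
        echo "ok:      $MODEL_DIR/$f"
    else
        echo "missing: $MODEL_DIR/$f"
    fi
done
```

If `conf/ivector.conf` comes back missing, that suggests this model was not packaged for the GPU batch pipeline (which, per the stack trace, constructs an `OnlineNnet2FeaturePipelineInfo` and hence requires the i-vector config); a model that ships an i-vector extractor config would be needed instead.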