-
Hi,
I am getting around 3% WER with fast-beam-search and greedy-search. However, I am getting 70% WER when I use fast-beam-search-ngram. My decode configuration looks as below. I am using pruned_tran…
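(For readers comparing these numbers: WER here is the standard Levenshtein-distance-based word error rate. A minimal plain-Python sketch of how it is computed — not the scoring code used by the toolkit, just an illustration:)

```python
def wer(ref_words, hyp_words):
    """Word error rate via edit distance:
    (substitutions + deletions + insertions) / len(ref)."""
    R, H = len(ref_words), len(hyp_words)
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i
    for j in range(H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[R][H] / R

# one insertion against a 3-word reference -> WER of 1/3
print(wer("the cat sat".split(), "the cat sat on".split()))
```

Note that because insertions count as errors, WER can exceed 100% — e.g. a 1-word reference decoded as 3 words gives 200% — which is why a broken setup can report WER values far above 100.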
-
Hi!
I am currently working on a streaming Transformer Transducer (T-T) myself (using TensorFlow), but I'm struggling to get started with the actual inference part. I've been referred to your reposit…
-
I compiled decode-file-c-api from the c-api-demo to do file reading and hotword processing, and I get the messages: Cannot find ID for token THE at line: THE. (Hint: words on the same line are separated by spaces) and 405 Failed to encode some hotwords, skip them already…
-
When I tried fine-tuning, I found that the WER after fine-tuning was over 100, which suggests something went wrong during fine-tuning. Below is a short log from my fine-tuning run; I'm not sure if…
-
I'm currently using `sherpa/bin/streaming_pruned_transducer_statelessX/streaming_server.py` with all the underlying C++ code for modified_beam_search (`RnntConformerModel`, `StreamingModifiedBeamSearc…
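(For anyone wiring up their own search on top of this server: a toy plain-Python illustration of the generic beam-pruning idea — extend every hypothesis, then keep only the top-k by accumulated log-probability. This is not the repository's C++ `StreamingModifiedBeamSearch` algorithm, just the core pruning step it shares with any beam search:)

```python
import math

def prune_step(hyps, log_probs, beam=4):
    """One expansion step: extend every hypothesis with every token,
    then keep the `beam` best by total log-probability.

    hyps:      list of (token_list, score) pairs
    log_probs: dict mapping token -> log-prob for the current frame
    """
    expanded = [
        (tokens + [tok], score + lp)
        for tokens, score in hyps
        for tok, lp in log_probs.items()
    ]
    expanded.sort(key=lambda h: h[1], reverse=True)
    return expanded[:beam]

# start with a single empty hypothesis and process one frame
hyps = [([], 0.0)]
frame = {"a": math.log(0.6), "b": math.log(0.3), "<blk>": math.log(0.1)}
hyps = prune_step(hyps, frame, beam=2)  # keeps the two best hypotheses
```

The real implementation additionally merges hypotheses that share the same token sequence and runs frame-synchronously over streaming chunks, but the pruning step above is the part that the beam size controls.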
-
Hello,
I'm trying to convert a pretrained model trained on LibriSpeech, but I run into the following error:
```
tensorflow.lite.python.convert_phase.ConverterError: input resource[0] expected type r…
```
-
Hello, I'm currently testing the Emformer model and have copied the parameters from "sherpa-cnn-conv-emformer-transducer-small-2023-01-09" into my own model. While the finished Emformer model (2023-01…
-
I am trying to decode a model trained with the recipe **pruned_transducer_stateless7_streaming**. I am able to decode successfully with fast beam search (without LG); however, when I try to decode with LG …
-
I have a Japanese model using the streaming Zipformer, trained with the command below, and I noticed a difference of ~1.0% between my decoding results with decode.py and streaming_decode.py.
> ./prun…
-
Hi,
This is really great work. Thank you very much for the streaming transducers. Is it possible to add hints at runtime in the streaming transducers (Section 4)? Say I have some names which are not a…