-
I have tried these commands to convert both language models to the intermediate format so I can start interpolating them:
```
bin/lmplz -o 3 --intermediate set1.intermediate -7.998156 0
> -7.995031 5…
```
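For context, the usual KenLM interpolation workflow is: build an intermediate file per corpus with `lmplz --intermediate`, then combine them with the `interpolate` binary. The flag names below are assumptions from memory, not verified against your build, so check `bin/interpolate --help` before running:

```shell
# Build intermediate files for each corpus (reads text from stdin).
bin/lmplz -o 3 --intermediate set1.intermediate < corpus1.txt
bin/lmplz -o 3 --intermediate set2.intermediate < corpus2.txt

# Combine the intermediates into one interpolated ARPA model.
# -m: intermediate models to mix, -t: tuning text (flags are assumptions).
bin/interpolate -m set1.intermediate set2.intermediate -t tune.txt > interpolated.arpa
```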
-
-
-
### Question
The lexicon-based beam search decoder currently has a fixed lexicon and thus a closed set of words that an ASR model can recognise. Is there a way to input a list of additional words/phr…
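Typically the lexicon is fixed at decoder construction time, so the common workaround is to regenerate the lexicon with the extra words spelled out as token sequences and rebuild the decoder. A minimal sketch of the lexicon-merging step (the `spell` helper, the `|` word-boundary token, and the dict-based lexicon are illustrative assumptions, not the actual decoder API):

```python
# Sketch: extend a lexicon (word -> spelling as character tokens) with new
# words. The decoder itself must then be re-instantiated with the merged
# lexicon; most lexicon-based beam-search decoders cannot accept new words
# at runtime.

def spell(word):
    """Naive spelling: one token per character, plus a word-boundary token."""
    return list(word.lower()) + ["|"]

def extend_lexicon(lexicon, new_words):
    """Return a copy of `lexicon` with entries added for each new word."""
    merged = dict(lexicon)
    for w in new_words:
        merged.setdefault(w, spell(w))
    return merged

base = {"hello": list("hello") + ["|"]}
merged = extend_lexicon(base, ["covid", "wav2letter"])
for word, tokens in merged.items():
    print(word, " ".join(tokens))
```

The printed lines follow the common `word token token …` lexicon file format, but verify against the format your decoder actually expects.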
-
At runtime, the KenLM import seems to fail. Does this model need to be downloaded manually? How can I fix this?
![image](https://github.com/user-attachments/assets/791f6e88-94ba-40d5-9d7a-27fbf81e0292)
-
The topic of LM training came up again recently.
The aligner produces weighted alignment lattices. There is some evidence that augmenting the Maximization step in the EM alignment process with the…
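As a toy illustration of the idea (not the aligner's actual code): in a weighted M-step, each alignment path contributes its lattice posterior weight to the sufficient statistics instead of a hard count of 1:

```python
from collections import defaultdict

def weighted_m_step(alignments):
    """Re-estimate P(obs | state) from weighted alignment paths.

    `alignments` is a list of (weight, [(state, obs), ...]) pairs, where
    `weight` is the posterior of that path in the alignment lattice.
    """
    counts = defaultdict(lambda: defaultdict(float))
    for weight, path in alignments:
        for state, obs in path:
            counts[state][obs] += weight  # soft count, not 1.0
    # Normalize per state to get emission probabilities.
    probs = {}
    for state, obs_counts in counts.items():
        total = sum(obs_counts.values())
        probs[state] = {o: c / total for o, c in obs_counts.items()}
    return probs

probs = weighted_m_step([
    (0.9, [("s1", "a"), ("s2", "b")]),
    (0.1, [("s1", "b"), ("s2", "b")]),
])
print(probs["s1"])  # -> {'a': 0.9, 'b': 0.1}
```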
-
GPUs are too expensive, so I can only perform inference on CPUs.
I fine-tuned the XLS-R 300M model on some data and then tried to run inference on CPU,
but it's too slow; the RTF ex…
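For reference, RTF (real-time factor) is processing time divided by audio duration, so RTF < 1.0 means faster than real time. A quick way to measure it (the `decode_fn` here is a placeholder, not the actual XLS-R inference call):

```python
import time

def measure_rtf(decode_fn, audio, sample_rate):
    """Run `decode_fn(audio)` and return (result, rtf)."""
    audio_seconds = len(audio) / sample_rate
    start = time.perf_counter()
    result = decode_fn(audio)
    elapsed = time.perf_counter() - start
    return result, elapsed / audio_seconds

# Example with a dummy decoder; substitute your model's inference call.
dummy = lambda a: "transcript"
text, rtf = measure_rtf(dummy, [0.0] * 16000, 16000)  # 1 second of audio
print(f"RTF: {rtf:.4f}")
```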
-
Hi, @timediv. I changed this in recording.py:
```
print('Recording audio')
raw_audio, sample_width = recorder.record()
raw_audio = np.array(raw_audio)
```
to
```
import soundfile as sf
raw_aud…
```
-
### Question
I'm using the Streaming Convnets [Model](https://github.com/facebookresearch/wav2letter/tree/v0.2/recipes/models/streaming_convnets) to train my own dataset (English) that is from different …
-
I have completed training for 60 epochs on our own English dataset. We followed all the rules correctly, added the data, and obtained the best acoustic model. We have now started decoding; below is the …