-
### Question
I found that decoding long audio is much worse than splitting the long audio into multiple parts and decoding each. I use LibriSpeech as the dataset and test with `decode_transformer_s2s_ngram.cfg`, e.g.
`LibriSpeech/t…
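One way to test this is to pre-split the long recording into fixed-length pieces before decoding. The sketch below (names and the chunk length are placeholders, not from the issue) cuts a WAV file into consecutive chunks using only the standard-library `wave` module:

```python
# Sketch: split a long WAV into fixed-length chunks before decoding.
# `chunk_seconds` and the output naming scheme are illustrative choices.
import wave

def split_wav(path, chunk_seconds, out_prefix):
    """Write consecutive chunk_seconds-long pieces of `path` as separate WAVs;
    return the list of chunk file paths."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * chunk_seconds)
        chunks = []
        idx = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{out_prefix}_{idx:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setnchannels(params.nchannels)
                dst.setsampwidth(params.sampwidth)
                dst.setframerate(params.framerate)
                dst.writeframes(frames)
            chunks.append(out_path)
            idx += 1
        return chunks
```

Note that naive fixed-length cuts can slice through words; splitting on silence (e.g. with a VAD) usually gives better per-chunk transcripts.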
-
## ❓ Questions and Help
Hi! Thank you for releasing the Wav2vec 2.0 code.
I'm trying to run inference with the Wav2vec pre-trained model, but there are some problems.
This is the command I used …
-
### Question
I am getting an error when testing on custom data while following the tutorial (https://github.com/facebookresearch/flashlight/tree/master/flashlight/app/asr/tutorial) to fine-tune on cust…
-
It says there is no command named `lmplz` when I try to train an n-gram LM. What should I do?
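`lmplz` is KenLM's LM-training binary and is typically built from the KenLM sources (a CMake build produces it under `build/bin`), so "command not found" usually means that directory is not on `PATH`. A small diagnostic sketch, assuming nothing about your install locations (the `extra_dirs` argument is a hypothetical convenience for checking a build directory directly):

```python
# Sketch: locate the KenLM `lmplz` binary on PATH or in extra directories.
# `extra_dirs` is an assumed helper parameter, e.g. ("~/kenlm/build/bin",).
import shutil

def find_lmplz(extra_dirs=()):
    """Return a path to lmplz if found on PATH or in extra_dirs, else None."""
    path = shutil.which("lmplz")
    if path:
        return path
    for d in extra_dirs:
        candidate = shutil.which("lmplz", path=d)
        if candidate:
            return candidate
    return None
```

If it returns `None` everywhere, build KenLM and either add its `build/bin` directory to `PATH` or invoke `lmplz` by its full path.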
-
To whom it may concern,
Hi. Thanks for your efforts first.
I am training a **Cantonese model** with **self-collected audio data** (around 250 hours) and an **external decoder** (around 700M uncompre…
-
### Bug Description
Installing flashlight for wav2letter
Errors occur while building flashlight with the CUDA backend that seem related to Intel MKL-DNN (`Library mkl_intel: not found`): `undefined refe…
-
I'm trying to build the Python bindings for a fairseq model, to use it in speech recognition modules.
But when I run the command
`pip install -e .`,
I get the errors below:
```
Default…
-
Hey, I am trying to build wav2letter and I am finding that these symbols are not found for this architecture. Why is that? @kpu
```
Undefined symbols for architecture x86_64:
"lm::ngra…
-
I want to build a character-level 20-gram LM. What is different in this case from building a word-level LM?
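The main difference is corpus preparation: `lmplz` treats whitespace-separated tokens as "words", so for a character-level LM each character must become its own token, with some marker for the original word boundaries. A minimal sketch (the `<space>` boundary token is an assumed convention, not something KenLM mandates):

```python
# Sketch: turn a word-level corpus line into character tokens for lmplz.
# "<space>" is an assumed placeholder token marking word boundaries so the
# 20-gram can model context across words.
def to_char_tokens(line, boundary="<space>"):
    """'the cat' -> 't h e <space> c a t'"""
    words = line.strip().split()
    tokens = []
    for i, w in enumerate(words):
        if i > 0:
            tokens.append(boundary)
        tokens.extend(w)  # a str iterates as characters
    return " ".join(tokens)
```

You then train on the converted corpus with a high order (e.g. `lmplz -o 20`); with such a tiny vocabulary the default Kneser-Ney discount estimation can fail, in which case `lmplz` suggests its `--discount_fallback` option.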
-
We are pointing to the kenlm zip file... however, the [official repo](https://github.com/kpu/kenlm/issues/50) points to https://pypi.org/project/kenlm/0.0.0/ as the official package... can we use that instead?
…