-
Hi,
I have a quick question. Where can I find the recipe for pre-training HuBERT with academic compute?
Paper link: https://arxiv.org/pdf/2306.06672
Thank you.
-
Both the librispeech and librispeech_100 recipes contain a symlink to the utils directory from Kaldi:
https://github.com/espnet/espnet/blob/master/egs2/librispeech/asr1/utils and https://github.com/espnet/e…
-
## 🐛 Bug
Wav2Vec2's newly released fine-tuned conformer checkpoints (see [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#pre-trained-models)) don't produce reasonable results o…
-
# Normalize the input to zero mean and unit std.
if am == 'librispeech':
    dir = 'seq_data_librispeech'
    norm_mean, norm_std = 3.203, 4.045
elif am == 'paiia':
    …
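For what it's worth, applying those statistics is plain standardization; a minimal sketch (numpy and the `feats` array are my assumptions, not part of the original snippet):
```
import numpy as np

# feats: (time, dim) acoustic features for one utterance (assumed shape).
feats = np.random.randn(100, 80).astype(np.float32)

# Statistics selected for the chosen AM above.
norm_mean, norm_std = 3.203, 4.045

# Shift to zero mean and scale to unit std.
feats_norm = (feats - norm_mean) / norm_std
```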
-
Unable to convert ASR Conformer CTC from NVIDIA NGC to Riva
Steps to reproduce:
- Create conda environment
```
conda create --name nemo python==3.10.12
conda activate nemo
conda install p…
-
I saw that list files such as "LibriSpeech/list/train.txt" are required parameters for `main.py`. It seems such files are not provided by LibriSpeech officially. What is their format? Could you provi…
-
Hi,
Thank you for the nice software. Could you please share the following information:
1) How long does training on LibriSpeech take, and on how many GPUs?
2) How fast is decoding (RTF)?
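For reference, RTF is usually computed as wall-clock decoding time divided by audio duration; a minimal sketch (the `decode_fn` callable is hypothetical, not from this software):
```
import time

def real_time_factor(decode_fn, audio, audio_duration_sec):
    """Return RTF = decoding wall-clock time / audio duration (RTF < 1 is faster than real time)."""
    start = time.perf_counter()
    decode_fn(audio)                      # decode one utterance
    elapsed = time.perf_counter() - start
    return elapsed / audio_duration_sec
```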
-
Hi Mirco, Santi,
Thanks again for this great contribution. I had a look at the code and the paper. The architecture is interesting. I want to train this architecture on LibriSpeech for speaker ID in the same s…
-
### Description
I've run the following code in Google Colab with a GPU, and it got stuck after printing 'Saving checkpoints for 0 into /content/test/model.ckpt.' Any ideas?
!pip install -q tensor2te…
-
When I train with the S4 decoder on the LibriSpeech asr1 recipe, the loss looks very good.
However, when I run inference with the S4 decoder, the WER is very bad, and the inference beam-search CER is much larger than the train…
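In case it helps to double-check the scoring itself, here is a minimal edit-distance sketch for computing WER/CER independently of the recipe's scoring script (the function names are my own, not from ESPnet):
```
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref_text, hyp_text):
    ref, hyp = ref_text.split(), hyp_text.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def cer(ref_text, hyp_text):
    ref, hyp = list(ref_text.replace(" ", "")), list(hyp_text.replace(" ", ""))
    return edit_distance(ref, hyp) / max(len(ref), 1)
```
If the recomputed numbers agree with the recipe's scorer, the gap is in decoding (e.g. beam-search settings) rather than in scoring.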