-
Related to **Model/Framework(s) or something else (describe)**
*Examples:*
* *GNMT/PyTorch*
* *AMP*
* *Tensorflow 2.0*
* *Jupyter notebooks*
* DALI version - 1.8.0
**Is your feature req…
-
I ran an experiment on the LibriSpeech dataset: I trained for 15 epochs but got a CER of 0.46, and the results do not look good. Could you tell me how many epochs are appropriate for LibriSpeech?
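For reference, here is a minimal sketch (my own, not from the training code) of how CER and WER are commonly computed, assuming the `jiwer` package and plain-text reference/hypothesis transcripts:
```python
# Minimal sketch: compute CER/WER over a set of transcripts with jiwer (assumed dependency).
import jiwer

references = ["the quick brown fox", "hello world"]   # ground-truth transcripts
hypotheses = ["the quick brown fx", "hello word"]     # model outputs

cer = jiwer.cer(references, hypotheses)   # character error rate over the whole set
wer = jiwer.wer(references, hypotheses)   # word error rate, often reported alongside CER
print(f"CER: {cer:.3f}  WER: {wer:.3f}")
```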
-
### Description
I trained the Librispeech clean small problem with the Transformer model, and I found a large difference in performance when using the hyper-parameter set _transformer_libris…
-
Thanks for your amazing work.
I evaluated the released xcodec model on the LibriSpeech test-clean set using the ABX error rate metric. I performed the evaluation with the continuous representations before RVQ…
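For context (not from the original report), here is a rough sketch of how per-utterance continuous features might be dumped for an ABX evaluation; `load_xcodec` and `encode_continuous` are hypothetical placeholders for whatever the released checkpoint actually exposes, and the (n_frames, dim) .npy layout is the one expected by common Libri-Light/ZeroSpeech-style ABX scripts:
```python
# Hedged sketch: dump pre-RVQ continuous representations for an ABX evaluation.
# `load_xcodec` and `encode_continuous` are HYPOTHETICAL placeholders, not the real API.
from pathlib import Path
import numpy as np
import torch
import torchaudio

def dump_features(model, wav_dir: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for wav_path in sorted(wav_dir.rglob("*.flac")):
        wav, sr = torchaudio.load(str(wav_path))      # (channels, samples)
        with torch.no_grad():
            feats = model.encode_continuous(wav, sr)  # hypothetical: (n_frames, dim) tensor
        np.save(out_dir / f"{wav_path.stem}.npy", feats.cpu().numpy())

# model = load_xcodec("path/to/checkpoint")           # hypothetical loader
# dump_features(model, Path("LibriSpeech/test-clean"), Path("abx_feats"))
```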
-
Hi,
I've just finished training a Conformer with the SentencePiece featurizer on LibriSpeech for 50 epochs.
Here are the results if you want to update your readme:
```
dataset_config:
t…
```
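As a side note (not part of the config above), here is a minimal sketch of training a SentencePiece model on LibriSpeech transcripts, assuming a recent `sentencepiece` release that accepts keyword arguments; the transcript file name, vocab size, and model type are illustrative:
```python
# Hedged sketch: train a SentencePiece tokenizer on LibriSpeech transcripts.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="librispeech_train_transcripts.txt",  # one transcript per line (illustrative path)
    model_prefix="librispeech_bpe",
    vocab_size=1000,
    model_type="bpe",
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="librispeech_bpe.model")
print(sp.encode("HELLO WORLD", out_type=str))   # prints the subword pieces
```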
-
Hi,
I'm checking your ASR frontend, specifically the Librispeech audio feature extraction, and have some questions.
References: [Librispeech params](https://github.com/tensorflow/lingvo/blob/d44…
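For comparison (my own sketch, not the lingvo code), here is a typical LibriSpeech log-mel frontend with the commonly used 25 ms window, 10 ms hop, and 80 mel bins; the mel edges, dither, and log floor in lingvo may differ, so treat these constants as assumptions to check against the linked params:
```python
# Hedged sketch of a typical LibriSpeech log-mel frontend (25 ms / 10 ms / 80 bins).
import torch
import torchaudio

SAMPLE_RATE = 16000
melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=512,        # next power of two above 25 ms * 16 kHz = 400 samples
    win_length=400,   # 25 ms window
    hop_length=160,   # 10 ms hop
    f_min=125.0,      # assumed lower mel edge
    f_max=7600.0,     # assumed upper mel edge
    n_mels=80,
    power=2.0,
)

wav, sr = torchaudio.load("84-121123-0000.flac")  # example LibriSpeech utterance path
assert sr == SAMPLE_RATE
log_mel = torch.log(melspec(wav) + 1e-6)          # (channels, 80, n_frames)
print(log_mel.shape)
```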
-
Hi guys, amazing work with the icefall recipes. I am quite new to using the recipes and am having a hard time using lhotse to create a custom dataset for my language (Bengali).
I have see…
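In case it helps to make the question concrete, here is a minimal sketch (not an icefall recipe) of building lhotse manifests for a custom corpus, assuming one audio file per utterance plus a TSV of `utt_id<TAB>text`; the file layout and paths are hypothetical:
```python
# Hedged sketch: build lhotse Recording/Supervision/Cut manifests for a custom corpus.
# The metadata.tsv layout and paths are hypothetical placeholders.
from lhotse import CutSet, Recording, RecordingSet, SupervisionSegment, SupervisionSet

recordings, supervisions = [], []
with open("bengali_corpus/metadata.tsv", encoding="utf-8") as f:
    for line in f:
        utt_id, text = line.rstrip("\n").split("\t")
        rec = Recording.from_file(f"bengali_corpus/audio/{utt_id}.wav", recording_id=utt_id)
        recordings.append(rec)
        supervisions.append(
            SupervisionSegment(
                id=utt_id,
                recording_id=utt_id,
                start=0.0,
                duration=rec.duration,
                channel=0,
                text=text,
                language="Bengali",
            )
        )

recording_set = RecordingSet.from_recordings(recordings)
supervision_set = SupervisionSet.from_segments(supervisions)
recording_set.to_file("data/manifests/recordings.jsonl.gz")
supervision_set.to_file("data/manifests/supervisions.jsonl.gz")

# Cuts are what the training dataloaders typically consume.
cuts = CutSet.from_manifests(recordings=recording_set, supervisions=supervision_set)
cuts.to_file("data/manifests/cuts.jsonl.gz")
```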
-
There are several unclear lines of code in espnet/egs2/owsm_v1/s2t1/local/prepare_librispeech.py.
Is there a more accurate script to prepare LibriSpeech for owsm_v1 training?
FileNotFoundErro…
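For what it's worth, here is a small sketch (my own, not from espnet) that checks the standard LibriSpeech layout that preparation scripts generally assume, i.e. <root>/<split>/<speaker>/<chapter>/ with one <speaker>-<chapter>.trans.txt per chapter; the root path and split list are illustrative:
```python
# Hedged sketch: sanity-check the standard LibriSpeech directory layout before running
# a preparation script. Paths and split names below are illustrative.
from pathlib import Path

root = Path("downloads/LibriSpeech")   # adjust to your data root
for split in ["train-clean-100", "train-clean-360", "train-other-500"]:
    split_dir = root / split
    if not split_dir.is_dir():
        print(f"missing split: {split_dir}")
        continue
    n_flac, n_missing_trans = 0, 0
    for chapter_dir in split_dir.glob("*/*"):
        flacs = list(chapter_dir.glob("*.flac"))
        n_flac += len(flacs)
        trans = chapter_dir / f"{chapter_dir.parent.name}-{chapter_dir.name}.trans.txt"
        if flacs and not trans.is_file():
            n_missing_trans += 1
    print(f"{split}: {n_flac} flac files, {n_missing_trans} chapters missing .trans.txt")
```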
-
### Description
I have used T2T to test LibriSpeech performance before, and I remember the WER on test-clean was almost 7%, but now the WER is very poor. Also, the decoding result is the same thi…
-
Hello, I am processing and training on the AISHELL-4 dataset using the command:
python diaper/train.py -c DiaPer/models/10attractors/SC_LibriSpeech_2spk_adapted1-10_finetuneAISHELL4mix/train.yaml,
…