-
Hi, I tried to reproduce the LibriSpeech results using train_am_tds_ctc.cfg:
--runname=am_tds_ctc_librispeech
--rundir=/root/wav2letter.debug/recipes/models/sota/2019/librispeech/
--archdir=/ro…
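For context, a flagfile for this recipe generally looks like the sketch below. The flag names are standard wav2letter `Train` flags, but every path and value here is a placeholder, not the recipe's actual setting:

```
# Sketch of a Train flagfile; all paths/values are placeholders.
--runname=am_tds_ctc_librispeech
--rundir=/path/to/rundir
--archdir=/path/to/arch/dir
--arch=am_tds_ctc.arch
--tokens=/path/to/tokens.txt
--lexicon=/path/to/lexicon.txt
--train=/path/to/lists/train.lst
--valid=/path/to/lists/dev.lst
--criterion=ctc
```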
-
Hi,
I have my own dataset of about 300 hours, with custom words from a different domain.
I want to train an ASR model using TDS Seq2Seq as given [here](https://github.com/facebookresearch/wav2letter/…
-
### A description of what we have done
1. We set up a **Spanish training** run using this typical example [architecture](https://github.com/facebookresearch/wav2letter/blob/v0.2/tutorials/1-librispe…
-
We have trained a streaming convnets Hindi model using the recipe provided [here](https://github.com/facebookresearch/wav2letter/tree/master/recipes/streaming_convnets). However, after converting it using s…
-
### I would like to use the pre-trained model trained on the LibriSpeech dataset on my own customized data, changing only the language model and the lexicon file.
I used the …
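Swapping in a new lexicon usually means regenerating the word-to-token-spelling mapping for your vocabulary. A minimal sketch in Python, assuming a letter-based token set with `|` as the word-boundary token (as in the LibriSpeech tutorials); `make_lexicon` is a hypothetical helper, not a wav2letter tool:

```python
def make_lexicon(words):
    """Map each word to a space-separated letter spelling ending in the
    word-boundary token '|', one candidate spelling per word."""
    entries = {}
    for w in words:
        w = w.strip().lower()
        if w:
            entries[w] = " ".join(w) + " |"
    return entries

# Write entries in the "word<TAB>spelling" layout the recipes use.
lexicon = make_lexicon(["Hello", "world"])
for word, spelling in sorted(lexicon.items()):
    print(f"{word}\t{spelling}")
```

If your token set is subword units instead of letters, the spelling step would use the subword segmenter rather than `" ".join(w)`.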
-
## 🐛 Bug
Getting an assertion error (cutoff) when trying to run Wav2Vec 2.0 CTC inference with a Transformer LM. What do I need to change to get this working?
### To Reproduce
Steps to reproduc…
-
### Question
I'm using the Streaming Convnets [Model](https://github.com/facebookresearch/wav2letter/tree/v0.2/recipes/models/streaming_convnets) to train on my own dataset (English) that is from a different …
-
I am using a CTC-Transformer architecture for English and Hindi speech.
For example, the output is missing words such as:
- I
- Show
- Add
- Get
- What
- How
- Hi I
Dataset:
About 35k English/Hindi short se…
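One common cause of words being systematically dropped is that they never made it into the decoding lexicon (or occur too rarely in the training transcripts). A quick sanity check, sketched in Python; `find_missing` is illustrative, not a wav2letter utility:

```python
def find_missing(transcripts, lexicon_words):
    """Return words that appear in the transcripts but not in the lexicon."""
    seen = set()
    for line in transcripts:
        seen.update(line.lower().split())
    return sorted(seen - {w.lower() for w in lexicon_words})

missing = find_missing(
    ["show me the weather", "what is this"],
    ["me", "the", "weather", "is", "this"],
)
print(missing)  # → ['show', 'what']
```

If short words like "I" or "Show" turn up missing, adding them to the lexicon (and checking the LM vocabulary covers them) would be the first thing to try.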
-
Hi,
Nice project! Thanks for your work; I wish I had better hardware to make use of it :D
I haven't seen anyone mention [CLBlast](https://github.com/CNugteren/CLBlast) here yet.
They actually provide …
-
Hello, I am trying to create a model using `public_series_1` from the Russian dataset [open_stt](https://github.com/snakers4/open_stt). I use this [recipe](https://github.com/flashlight/wav2letter/blob/master/r…