Open luomingshuang opened 2 years ago
The next generation of Kaldi is developing rapidly. We have obtained competitive results on several large and popular datasets based on k2, icefall, and Lhotse. Now we want to apply it to many more datasets, and we welcome volunteers to add recipes for new datasets or new models. You can comment on this issue if you want to add a dataset or model, and I will put your name in the appropriate volunteer slot; this avoids overlapping work. (Note: you can also choose a dataset yourself even if it doesn't appear in the table below.)

| dataset | model | volunteer |
|---|---|---|
| WenetSpeech | pruned_transducer_stateless2 | @luomingshuang |
| AISHELL4 | pruned_transducer_stateless5 #399 | @luomingshuang |
| MGB2 | conformer_ctc #396 | @AmirHussein96 |
| Switchboard | | @ngoel17 |
| TAL_CSASR | pruned_transducer_stateless5 | @luomingshuang |
| AISHELL2 | pruned_transducer_stateless5 (Done!) | @yuekaizhang |
| AISHELL3 | | |
| THCHS-30 | | |
| TED-LIUM | | |
| TED-LIUMv2 | | |
| Iban | | |
| TIBMD@MUC | | |
| .. | .. | .. |
I suggest you have a look at this concrete tutorial: https://icefall.readthedocs.io/en/latest/contributing/how-to-create-a-recipe.html
Here I want to provide a simple tutorial on how to add a recipe quickly and easily. Before you build a recipe, I strongly suggest you look at our other existing recipes (https://github.com/k2-fsa/icefall/tree/master/egs) and this tutorial: https://icefall.readthedocs.io/en/latest/contributing/how-to-create-a-recipe.html. You will find that there is not much you need to modify or add. The steps for adding a recipe to icefall are below. (Don't be afraid of making mistakes; many people will help you complete it as long as you submit your PR.)
If I build a pruned_transducer_stateless2 recipe for an English dataset, such as tedlium:

1. Build `egs/tedlium/ASR` and `cd egs/tedlium/ASR`. Then establish a soft link for `shared` with `ln -s ../../../egs/librispeech/ASR/shared .`
2. Build `prepare.sh` to prepare the data for training and testing. (I suggest you learn from other `prepare.sh` scripts, such as `egs/librispeech/ASR/prepare.sh` and `egs/tedlium3/ASR/prepare.sh`.) You also have to build a directory called `local`: the files and functions used in `prepare.sh` live in `local`, and you can copy the Python files from `egs/librispeech/ASR/local/` or `egs/tedlium3/ASR/local/`. You should use the text to train a BPE model. In addition, you have to build a `compute_fbank_tedlium.py` to compute the fbank features. If there is no recipe for this dataset in Lhotse, you can submit a PR to Lhotse to add one.
3. Build `pruned_transducer_stateless2`, which contains the training and decoding files. You can copy the files from `egs/librispeech/ASR/pruned_transducer_stateless2` to `egs/tedlium/ASR/pruned_transducer_stateless2`. You also need to change the places in `train.py` and `decode.py` where the data is read, substituting the corresponding dataset name (e.g., change `librispeech` to `tedlium`). Then you can use GPUs for training.
4. Decode with `pruned_transducer_stateless2`. You can choose which decoding method, epoch, and average to use.

If I build a pruned_transducer_stateless2 recipe for a Chinese dataset, such as thchs30:
1. Build `egs/thchs30/ASR` and `cd egs/thchs30/ASR`. Then establish a soft link for `shared` with `ln -s ../../../egs/aishell/ASR/shared .`
2. Build `prepare.sh` to prepare the data for training and testing. (I suggest you learn from other `prepare.sh` scripts, such as `egs/wenetspeech/ASR/prepare.sh` and `egs/aidatatang_200zh/ASR/prepare.sh`.) You also have to build a directory called `local`: the files and functions used in `prepare.sh` live in `local`, and you can copy the Python files from `egs/wenetspeech/ASR/local/` or `egs/aidatatang_200zh/ASR/local/`. You can decide whether word segmentation is necessary according to your dataset's text. In addition, you have to build a `compute_fbank_thchs30.py` to compute the fbank features. If there is no recipe for this dataset in Lhotse, you can submit a PR to Lhotse to add one.
3. Build `pruned_transducer_stateless2`, which contains the training and decoding files. You can copy the files from `egs/wenetspeech/ASR/pruned_transducer_stateless2` to `egs/thchs30/ASR/pruned_transducer_stateless2`. You also need to change the places in `train.py` and `decode.py` where the data is read, substituting the corresponding dataset name (e.g., change `wenetspeech` to `thchs30`). Then you can use GPUs for training.
4. Decode with `pruned_transducer_stateless2`. You can choose which decoding method, epoch, and average to use.

Could you add the tutorial to https://github.com/k2-fsa/icefall/tree/master/docs ?
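The directory scaffolding in the steps above can be sketched as the following shell session. This is a sketch only: it sets up a throwaway directory tree that mimics an icefall checkout, since the real `egs/librispeech` files are not assumed to be present here.

```shell
set -e

# Scratch tree standing in for an icefall checkout (assumption: recipes
# live under egs/<dataset>/ASR in the icefall repository).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/egs/librispeech/ASR/shared" \
         "$ROOT/egs/librispeech/ASR/local" \
         "$ROOT/egs/librispeech/ASR/pruned_transducer_stateless2"
touch "$ROOT/egs/librispeech/ASR/prepare.sh"

# Step 1: create the new recipe directory and enter it.
mkdir -p "$ROOT/egs/tedlium/ASR"
cd "$ROOT/egs/tedlium/ASR"

# Step 1 (cont.): soft-link shared/ from an existing recipe.
ln -s ../../../egs/librispeech/ASR/shared .

# Step 2: seed prepare.sh and local/ from an existing recipe, then adapt them.
cp ../../librispeech/ASR/prepare.sh .
cp -r ../../librispeech/ASR/local .

# Step 3: copy the model directory; train.py/decode.py must then be edited
# to read the new dataset instead of librispeech.
cp -r ../../librispeech/ASR/pruned_transducer_stateless2 .

ls -l
```

The symlink keeps `shared/` identical across recipes, while `local/` and the model directory are real copies you then adapt to the new dataset.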
Oh, I see there are already very concrete tutorials in https://github.com/k2-fsa/icefall/tree/master/docs. I just wrote a simple tutorial here. I think https://icefall.readthedocs.io/en/latest/contributing/index.html is enough.
fisher-swbd recipe coming soon.
I would like to try `pruned_transducer_stateless2` on aishell2 if no one is doing it.
@yuekaizhang You are very welcome.
PS: Please use `pruned_transducer_stateless4` or `pruned_transducer_stateless5`, which support saving averaged models periodically during training. It helps to improve performance.
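For context, the "averaged model" mentioned above is an element-wise average of the parameters saved at different points during training. A toy sketch with plain Python dicts standing in for checkpoint state dicts (the real icefall code averages PyTorch tensors; this only illustrates the idea):

```python
def average_checkpoints(state_dicts):
    """Element-wise average of parameter dicts.

    Toy stand-in for averaging the model parameters that icefall
    saves periodically during training.
    """
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}

# Three fake "checkpoints", each with a single scalar parameter.
ckpts = [{"w": 1.0}, {"w": 2.0}, {"w": 3.0}]
print(average_checkpoints(ckpts))  # {'w': 2.0}
```

Averaging smooths out the noise of individual checkpoints, which is why it tends to improve decoding results.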
Mentioning here to avoid recipe duplication. I will work on recipes for AMI and AliMeeting this fall. For both these datasets, there are close-talk and far-field recordings available. The idea would be to train a single model that can handle both settings. Additionally, we can also use GSS-enhanced multi-channel data for training, although this is optional. (We found during the CHiME-6 challenge that it helps significantly for overlapped speech.)
I'm working on the Tedlium `conformer_ctc2` recipe.
I am working on a Japanese CSJ recipe. So far I have a working lang_char model using the conv_emformer_transducer_stateless2 setup, yielding the preliminary results below at 28 epochs.
| dataset | CER (%) |
|---|---|
| eval1 | 5.67 |
| eval2 | 4.2 |
| eval3 | 4.4 |
In the spirit of pythonising the recipe, I have rewritten the bash and perl data preparation scripts from Kaldi's recipe. However, this yielded a somewhat different transcript than Kaldi's, so my results are not directly comparable with ESPnet and Kaldi.
I will send in a pull request once a version comparable to ESPnet and Kaldi is up.
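For readers unfamiliar with the metric in the table above: CER (character error rate) is the character-level edit distance between hypothesis and reference, divided by the reference length. A minimal stdlib-only sketch:

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit_distance(ref, hyp) / len(ref)."""
    m, n = len(ref), len(hyp)
    if m == 0:
        return 0.0 if n == 0 else float("inf")
    # Single-row dynamic-programming Levenshtein distance over characters.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # deletion
                dp[j - 1] + 1,                      # insertion
                prev + (ref[i - 1] != hyp[j - 1]),  # substitution
            )
            prev = cur
    return dp[n] / m

print(cer("kaldi", "kalda"))  # 0.2 (one substitution out of five characters)
```

Real scoring tools also report the insertion/deletion/substitution breakdown, but the ratio itself is just this.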
Thanks!
AMI recipe is now available: https://github.com/k2-fsa/icefall/pull/698
MGB2 is also available: https://github.com/k2-fsa/icefall/pull/396
ASR recipes often require some form of corpus-specific text normalization. We are trying to make such normalizations available in the manifest preparation stage in Lhotse (e.g., see the AMI, CHiME-6, and AliMeeting recipes in Lhotse). The specific implementations are done in `lhotse.recipes.utils` and called using an additional `normalize_text` argument in the `prepare` function. If you are working on an ASR recipe for a dataset that requires some specific text normalization, please consider adding this functionality in the Lhotse recipe so that people using Lhotse outside of icefall may also benefit from it.
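As an illustration of the kind of normalization meant here, below is a hypothetical helper in the spirit of (but not copied from) the Lhotse recipes: the function name, the `normalize` values, and the uppercase-no-punctuation rule are all assumptions chosen for the example.

```python
import re

def normalize_text(text: str, normalize: str = "upper_no_punct") -> str:
    """Hypothetical corpus-specific normalizer (illustrative only;
    not the actual Lhotse implementation)."""
    if normalize == "none":
        return text
    if normalize == "upper_no_punct":
        # Uppercase, then keep only letters, digits, spaces, and
        # apostrophes (a common convention for English ASR transcripts).
        text = text.upper()
        return re.sub(r"[^A-Z0-9' ]+", "", text).strip()
    raise ValueError(f"unknown normalization: {normalize}")

print(normalize_text("Okay, let's start the meeting."))  # OKAY LET'S START THE MEETING
```

Exposing the choice as an argument (rather than hard-coding it) is what lets Lhotse users outside icefall pick the normalization that matches their downstream tokenizer.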
AliMeeting multi-condition training recipe is merged: https://github.com/k2-fsa/icefall/pull/751