alphacep / vosk-api

Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
Apache License 2.0

Setup fine-tuning script for the models #185

Open nshmyrev opened 4 years ago

nshmyrev commented 4 years ago

As in

https://github.com/daanzu/kaldi-active-grammar/issues/33

https://github.com/gooofy/zamia-speech/issues/106

dpny518 commented 3 years ago

Can you provide the data/local/dict for this model http://alphacephei.com/vosk/models/vosk-model-small-en-us-0.3.zip? I'll help you write the script that downloads the dict and this model, fine-tunes on the data/train folder, and outputs vosk-model-small-en-us-new.

federico-zb commented 3 years ago

Could you also provide it for "vosk-model-small-es-0.3"? Thank you very much. I'm trying to fine-tune it and after that, I'll document the process.

nshmyrev commented 3 years ago

Better implementation here:

https://github.com/aarora8/kaldi2/blob/opensat_oct2020/egs/OpenSAT2020/s5/local/chain/run_finetune_tl.sh

lalimili6 commented 3 years ago

@nshmyrev I think this is a good and recent example. What is your opinion? https://github.com/kaldi-asr/kaldi/blob/master/egs/libri_css/s5_mono/local/chain/tuning/run_tdnn_1d_ft.sh

Ashutosh1995 commented 3 years ago

@nshmyrev I am trying to adapt a model trained on Indian English accented speech to wake-word data. I have set up the dataset as per the Kaldi format, but I cannot figure out how I should change the paths in the script to point to my dataset and model.

Ashutosh1995 commented 3 years ago

@nshmyrev could you please provide data/lang, data/local/lang, and the chain tree-dir for the Indian English Vosk zip folder?

nshmyrev commented 3 years ago

A more straightforward gist:

https://gist.github.com/daanzu/d29e18abb9e21ccf1cddc8c3e28054ff#file-run_finetune_tdnn_1a_daanzu-sh

Ashutosh1995 commented 3 years ago

@nshmyrev can you please provide the files needed for fine-tuning with daanzu's script for the Indian English accent model?

nshmyrev commented 3 years ago

Another useful link

https://github.com/zhaoyi2/CVTE_chain_model_finetune

plefebvre91 commented 3 years ago

Hi :)

Is it planned to complete the documentation on acoustic model fine-tuning (here: https://alphacephei.com/vosk/adaptation)? The procedure currently seems very unclear. For example:

LuggerMan commented 3 years ago

Hi again. Everybody is asking for the input files needed to fine-tune. When will they be released?

P.S. I don't quite understand: help has been requested since August, yet there has been no upload of the files you surely already have.

Archan2607 commented 3 years ago

Hi, I am also working on fine-tuning the Indian English Vosk model. Can anyone please guide me on the steps to follow, or on preparing proper documentation?

Also, @Ashutosh1995, I read one of your threads on this issue; have you had any success with that? Can you please discuss?

Thanks

LuggerMan commented 2 years ago

So, per #773,

you need lattices generated with nnet3/align_lats.sh.

align_lats.sh takes feats.scp as input; where could I find that?

nshmyrev commented 2 years ago

align_lats.sh takes feats.scp as input; where could I find that?

Feats are created with make_mfcc.sh from the data folder containing wav.scp/segments.
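
For reference, a minimal sketch of that feature-extraction step, assuming a standard Kaldi egs directory (steps/, utils/, path.sh and cmd.sh available); the data path, job count and MFCC config below are illustrative:

```sh
. ./path.sh
. ./cmd.sh

data=data/finetune   # adaptation data dir with wav.scp (and optionally segments)
nj=4                 # number of parallel jobs, at most the number of speakers

utils/fix_data_dir.sh $data                        # sort and clean the data dir
steps/make_mfcc.sh --nj $nj --cmd "$train_cmd" \
  --mfcc-config conf/mfcc_hires.conf $data         # writes feats.scp into $data
steps/compute_cmvn_stats.sh $data                  # per-speaker CMVN stats
utils/validate_data_dir.sh $data                   # sanity check before aligning
```

The MFCC config should match the features the acoustic model was trained on (typically hires MFCCs for the nnet3 chain models).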

LuggerMan commented 2 years ago

@nshmyrev so I basically need to extract feats from the data on which the model was trained, am I right?

nshmyrev commented 2 years ago

so I basically need to extract feats from the data on which the model was trained, am I right?

From the adaptation data; you do not need the training data.
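
For clarity, this is roughly what such a Kaldi adaptation data directory looks like (the file names are the standard ones; the utterance/speaker IDs and paths are made up for illustration):

```
data/finetune/
  wav.scp    # utt-id (or rec-id when segments is used) -> audio:  utt0001 /path/to/audio/utt0001.wav
  text       # utt-id -> transcript:   utt0001 hello world
  utt2spk    # utt-id -> speaker-id:   utt0001 spk01
  spk2utt    # built from utt2spk with utils/utt2spk_to_spk2utt.pl
  segments   # optional: utt-id rec-id start end, if utterances are cut from longer recordings
```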

LuggerMan commented 2 years ago

From adaptation data, you do not need training data

Ah, ok, now I get it! Thank you

nabil6391 commented 2 years ago

@nshmyrev is there by any chance a video tutorial on fine-tuning Kaldi or Vosk models? It would be great. Thanks

vikraman22 commented 2 years ago

Hi @nshmyrev, I'm trying to fine-tune the US English model. It requires the vosk-model-en-us-0.22-compile/exp/finetune_ali directory to contain final.mdl, ali.*.gz and the tree file. I have these files for the data with which I'm trying to fine-tune, but the data previously used to train the model is not available to the public from alphacep.

I got these files from a Kaldi model I was training from scratch, in the kaldi/egs/mini_librispeech/s5/exp/mono directory. Can I actually use the files from this directory, or can I use files from other directories such as tri3b, tri2b, etc.? Note: I used the same data both to train and to fine-tune the US English model.

Is the data used to train the original model also required? Also, final.mdl is initially only available in ./exp/chain/tdnn/final.mdl. Can I use the same one for the ./exp/nnet3/tdnn_sp/ and ./exp/finetune_ali/ directories?

Ashutosh1995 commented 2 years ago

@Archan2607 apologies for the late reply but I was temporarily involved in the ASR training and couldn't work out the training part completely.

nshmyrev commented 2 years ago

@vikraman22 please take note that we do not have an official fine-tuning tutorial, so this has to be a trial-and-error path.

For trial and error, you'd better ask one question at a time and try to solve simple questions yourself; there is no need to ask me to do simple things.

Your chances of getting help increase if you submit documentation on fine-tuning and the fine-tuning setup as a pull request to our codebase, just like the part we already have on training.

Ratevandr commented 2 years ago

Hi! I am trying to fine-tune the vosk-model-ru-0.22 model. I use the "run_finetune_tdnn_1a_daanzu.sh" script for this, and I am missing the ali.*.gz files. How can I generate them? I tried using the "steps/nnet3/align.sh" script, but got this error: ERROR (apply-cmvn[5.5.1009~1-e4940]:Value():util/kaldi-table-inl.h:164) Failed to load object from /home/shmyrev/kaldi/egs/ac/vosk-model-ru-0.22-compile/mfcc/raw_mfcc_test_sova_devices.1.ark:41 (to suppress this error, add the permissive (p, ) option to the rspecifier.

nshmyrev commented 2 years ago

How can I generate them?

With steps/nnet3/align.sh

but got error

There must be an earlier error, since the feature files are missing.
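
A rough sketch of how the alignment step fits together, assuming the adaptation data already has features (make_mfcc.sh / compute_cmvn_stats.sh, as sketched earlier) and that a lang directory matching the model's lexicon is available; the exp paths below are illustrative, not the actual layout of the -compile package:

```sh
. ./path.sh
. ./cmd.sh

data=data/finetune      # adaptation data with feats.scp and cmvn.scp already present
lang=data/lang          # lang dir that matches the model's phones and lexicon
srcdir=exp/chain/tdnn   # directory holding the existing final.mdl and tree
alidir=exp/finetune_ali # the ali.*.gz files are written here
nj=4

steps/nnet3/align.sh --nj $nj --cmd "$train_cmd" \
  $data $lang $srcdir $alidir
```

If the apply-cmvn error above appears, the feats.scp/cmvn.scp entries in the data directory point to feature archives that don't exist; rerun the feature-extraction step first.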

qp450 commented 6 months ago

Hello! I am also trying to run daanzu's finetuning script to finetune the German model vosk-model-de-0.21 and am looking for the ali.*.gz files. I had a look at steps/nnet3/align.sh, as suggested in the previous response, but if I understand correctly, that script requires the data-dir - as in data/train - to run, which is not present in the downloaded model. Could you provide the ali.*.gz files or indicate which directory to use as the data-dir? Thank you very much in advance!

nshmyrev commented 6 months ago

indicate which directory to use as the data-dir

The one with the audio samples you are going to use for fine-tuning.

qp450 commented 6 months ago

Thank you for your reply! I read in this comment of the fine-tuning discussion that if the alignment files are generated from the very small amount of fine-tuning data, as opposed to the large amount of training data, they might be of far inferior quality. This seems to have been confirmed in daanzu's reply, who then provided the alignment files for the English model. This is why I thought the initial alignment files were necessary.

nshmyrev commented 6 months ago

they might be of far inferior quality.

No, that is wrong. The alignment is just the timestamps of phonemes; it doesn't depend on the amount of data.

You do not need the original model's alignments to fine-tune.

qp450 commented 6 months ago

Ok great, thanks very much for your help!