alphacep / vosk-api

Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node

Training custom models using Vosk #41

Open KatPro opened 4 years ago

KatPro commented 4 years ago

Hello! Is it possible to train our own custom models like these: https://github.com/alphacep/kaldi-android-demo/releases using Vosk? What steps should we take after the index database is filled with data? Thank you!

nshmyrev commented 4 years ago

@KatPro models are trained with Kaldi. Follow the standard Kaldi training scripts, for example the mini_librispeech recipe.

KatPro commented 4 years ago

Thank you! And is it possible to train a model for another language by following the Kaldi training scripts?

nshmyrev commented 4 years ago

Reopening to increase visibility.

nshmyrev commented 4 years ago

Documentation about the process: https://github.com/alphacep/vosk-api/blob/master/doc/models.md#training-your-own-model

nyroDev commented 4 years ago

Hi @nshmyrev, will it be possible to have a "simple" script that takes a simple input folder with WAV and CSV files and does all the work to create a model?

nshmyrev commented 4 years ago

Will it be possible to have a "simple" script that takes a simple input folder with WAV and CSV files and does all the work to create a model?

Sure, it is called the mini_librispeech recipe; it is in kaldi/egs/mini_librispeech/s5/run.sh.
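For what it's worth, the recipe itself doesn't read CSV; it expects a Kaldi data directory containing three plain-text index files (wav.scp, text, utt2spk). A minimal sketch of generating them, assuming a hypothetical audio/ folder plus a transcripts.csv with filename,text rows (both names are illustrative, not part of the recipe):

```python
import csv
from pathlib import Path

# Hypothetical inputs: audio/*.wav plus transcripts.csv with "filename,text" rows.
audio_dir = Path("audio")
data_dir = Path("data/train")
data_dir.mkdir(parents=True, exist_ok=True)

with open("transcripts.csv", newline="") as f, \
        open(data_dir / "wav.scp", "w") as wav_scp, \
        open(data_dir / "text", "w") as text, \
        open(data_dir / "utt2spk", "w") as utt2spk:
    for row in csv.DictReader(f):
        utt_id = Path(row["filename"]).stem
        # wav.scp: utterance id -> path to the audio file
        wav_scp.write(f"{utt_id} {audio_dir / row['filename']}\n")
        # text: utterance id -> transcript
        text.write(f"{utt_id} {row['text']}\n")
        # utt2spk: utterance id -> speaker id (one speaker per utterance here)
        utt2spk.write(f"{utt_id} {utt_id}\n")
```

Kaldi wants these files sorted by utterance id; utils/fix_data_dir.sh will sort and validate the directory, and run.sh automates the rest (features, alignments, the network).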

swentel commented 4 years ago

First of all: thanks for the Android library! I'm testing it in https://github.com/swentel/solfidola, and it actually works pretty great!

I use it in my app for voice commands: certain words trigger an action. So I was wondering whether I could have a model that consists of only a few words. I basically only need to recognize words like 'one', 'two', 'three' and 'play'. I don't care about other words, as they don't trigger anything in the app.

I'm currently installing Kaldi (make is compiling, hehe), and then I'm going to try to figure out whether I can create a model with only a couple of words.

But I wonder: does this idea make sense, and will the model end up smaller? I'd rather not ship 30 MB for only a few words to recognize.

I'll write down the steps if I can figure it out myself. Any more detailed steps to create such a model would be awesome, but no worries if that's hard to write down in a few lines :)

nshmyrev commented 4 years ago

@swentel you can just rebuild the graph, see

https://github.com/alphacep/vosk-api/blob/master/doc/adaptation.md

You can also select words at runtime; see

https://github.com/alphacep/vosk-api/blob/master/python/example/test_words.py
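For a command vocabulary like yours, that example boils down to something like this minimal sketch (the model path and WAV file are placeholders):

```python
import json
import wave

from vosk import Model, KaldiRecognizer

model = Model("model")  # path to any standard Vosk model
wf = wave.open("command.wav", "rb")

# Restrict recognition to the command words; "[unk]" absorbs everything else.
grammar = json.dumps(["one", "two", "three", "play", "[unk]"])
rec = KaldiRecognizer(model, wf.getframerate(), grammar)

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)

print(json.loads(rec.FinalResult())["text"])
```

Note that this keeps the full model on the device, so it doesn't shrink the download; rebuilding the graph as in the adaptation doc is what reduces the size.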

let me know if you have further questions

nshmyrev commented 4 years ago

@swentel also see https://github.com/alphacep/vosk-api/issues/55

swentel commented 4 years ago

Oooh, great, thanks for the quick answer!

I'll get cracking at it after dinner. This will be awesome if it works, and I'm going to write a blog post about it, because the world needs to know about this :)

nshmyrev commented 4 years ago

Thank you @swentel, let me know how it goes!

swentel commented 4 years ago

So this actually seemed to work!

Based on the adaptation readme, both commands run, although I'm not 100% sure what the first command (fstsymbols ...) does.

However, after running the second command with the text file containing my custom words, Gr.fst is now only 2.6 MB (compared to 23 MB). I completely reinstalled the app on my phone and it still works. Saved 20 MB, that's great!

So, looking in the model directory, I still see a couple of files that are 'relatively' large.

I was wondering: can I do something with those too? Or even better, are they even needed for the recognizer to work? (To be honest, I could of course have already tested that myself by deploying a new version without those files.)

(I'm almost sorry for what I guess are newbie questions, completely new to kaldi, but super excited it works!)

nshmyrev commented 4 years ago

Or even better, are they even needed for the recognizer to work? (To be honest, I could of course have already tested that myself by deploying a new version without those files.)

Those files are still needed.

swentel commented 4 years ago

Ok, cool, thanks!

swentel commented 4 years ago

Published a blog post at https://realize.be/blog/offline-speech-text-trigger-custom-commands-android-kaldi-and-vosk

In case I made some stupid mistakes, do let me know ;)

nshmyrev commented 4 years ago

@swentel amazing, thanks a lot!

nshmyrev commented 3 years ago

Related #314

dazzzed commented 3 years ago

How do we structure the words.txt file for adaptation?

Trying with

covid-19 coronavirus

in my words.txt file, I get:

SymbolTable::ReadText: Bad non-negative integer "coronavirus"
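For reference, words.txt inside a model is a Kaldi symbol table, not a plain word list: each line must be a single word followed by its non-negative integer id, which is why a line containing two words fails to parse. An illustrative fragment (the ids here are made up):

<eps> 0
coronavirus 1
covid-19 2

Also note that the sentences used to rebuild Gr.fst can only use words already present in this table; genuinely new words require rebuilding the lexicon as well.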

plehal commented 2 years ago

The command mentioned here to create a new language model does not exist in a default compile of Kaldi; the egs directory is empty in a default compile.

jipinhetundu commented 2 years ago

Will it be possible to have a "simple" script that takes a simple input folder with WAV and CSV files and does all the work to create a model?

Sure, it is called the mini_librispeech recipe; it is in kaldi/egs/mini_librispeech/s5/run.sh.

I tried: 1) using the mini_librispeech recipe, which generated some files, and 2) arranging the files according to the "Model Structure" section at https://alphacephei.com/vosk/models.

But after the first step I have many files with the same name; for 'final.mdl', for example, I have exp/mono/final.mdl, exp/tri2b/final.mdl, exp/tri1/final.mdl, etc. I don't know which file I should put into the final structure. Any suggestions?

nshmyrev commented 2 years ago

We actually have our new recipe:

https://github.com/alphacep/vosk-api/tree/master/training

The trained model is in exp/chain/tdnn.
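To map the training output onto the layout described at https://alphacephei.com/vosk/models, the final model directory looks roughly like this (an illustrative subset; the exp/ source paths are the typical chain-recipe locations, so treat them as assumptions):

am/final.mdl                    acoustic model, from exp/chain/tdnn
graph/HCLG.fst                  decoding graph
graph/words.txt                 word symbol table
graph/phones/word_boundary.int  word boundary info
ivector/final.ie                i-vector extractor, typically from exp/nnet3/extractor
conf/mfcc.conf
conf/model.conf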

jipinhetundu commented 2 years ago

We now have our own recipe:

https://github.com/alphacep/vosk-api/tree/master/training

The trained model is in exp/chain/tdnn.

Thanks a lot for your answer!

I followed your steps and tried running the new recipe, but ran into a small problem at line 28 of run.sh; it tells me this is not the correct usage:

local/prepare_dict.sh data/local/lm data/local/dict
Usage: local/prepare_dict.sh [options] ...

I looked at the corresponding part of mini_librispeech/s5/run.sh, where the call is written as: local/prepare_dict.sh --stage 3 --nj 30 --cmd "$train_cmd" data/local/lm data/local/lm data/local/dict_nosp

So I changed the corresponding part to

  1. local/prepare_dict.sh data/local/lm data/local/lm data/local/dict
  2. local/prepare_dict.sh --stage 3 --nj 30 data/local/lm data/local/lm data/local/dict
  3. local/prepare_dict.sh --stage 3 --nj 30 --cmd "$train_cmd" data/local/lm data/local/lm data/local/dict

Options 1 and 2 produce different outputs, while 3 reports an error. I don't know much about Kaldi and I'm not sure if it's due to version differences; I updated to the latest version of Kaldi three days ago. What should I do next?

nshmyrev commented 2 years ago

Usage: local/prepare_dict.sh [options]

Seems like you are not using local/prepare_dict.sh from our recipe; you must have an old file. Ours doesn't have any options like those in the message:

https://github.com/alphacep/vosk-api/blob/master/training/local/prepare_dict.sh
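In other words, once the recipe's own prepare_dict.sh is in place, the two-argument call already at line 28 of run.sh is the right one, with no extra flags:

local/prepare_dict.sh data/local/lm data/local/dict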

jipinhetundu commented 2 years ago

Seems like you are not using local/prepare_dict.sh from our recipe; you must have an old file. Ours doesn't have any options like those in the message:

https://github.com/alphacep/vosk-api/blob/master/training/local/prepare_dict.sh

Looks like I asked a silly question. Thank you for answering patiently; I got it running successfully!

Manikandan18M commented 2 years ago

Also, @nshmyrev, what changes should I make to produce high-accuracy models if I'm training a model from scratch? You have suggested training an ivector of dim 40 to save memory, but does this affect accuracy?

It would also be helpful if you could share which directories to look in to build the final model compatible with Vosk, with files such as final.mdl, final.ie, conf, etc.

Manikandan18M commented 2 years ago

@nshmyrev

nshmyrev commented 2 years ago

Also, @nshmyrev, what changes should I make to produce high-accuracy models if I'm training a model from scratch?

It depends on too many factors: the speech domain, the amount of audio, the number of GPUs. It is hard to guess.

ankur995 commented 1 year ago

Documentation about the process: https://github.com/alphacep/vosk-api/blob/master/doc/models.md#training-your-own-model

I'm not able to open this URL.

nshmyrev commented 1 year ago

@ankur995 yes, it is obsolete. Our training setup is here:

https://github.com/alphacep/vosk-api/tree/master/training

There is also a Colab notebook:

https://github.com/alphacep/vosk-api/blob/master/python/example/colab/vosk-training.ipynb