el-tocino / localcroft

Bits for locally-served Mycroft instances
https://mycroft.ai

Question about wav content #4

Open emphasize opened 4 years ago

emphasize commented 4 years ago

Hi el-tocino,

I'm struggling a bit to find a German dataset to speed up the process of finding fake words.

There are some sets, but almost exclusively spoken sentences (half-sentences). Some are short, but I'm not certain they even qualify as training material. Is precise-train-incremental restricted to spoken words?

el-tocino commented 4 years ago

You can train precise for recognizing sneezes, actually, if so inclined.

Using sox you can trim longer clips down based on the silence between words. Aim for 3s or less per clip, then dump them in the nww folders as appropriate. It's still better to use false-activation words and noises where possible. Random speech will help to an extent, but you also want to fine-tune the model to be as accurate as possible both at catching the wake word and at rejecting not-wake-word audio.
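The sox route is the practical one; purely to illustrate the idea, here is a minimal Python sketch of the same "cut where silence exceeds a gap" logic. The threshold and gap values are made up for the demo and are not what sox uses.

```python
def split_on_silence(samples, threshold=0.01, min_gap=8000):
    """Split a mono sample sequence into chunks wherever `min_gap`
    consecutive samples fall below `threshold` in magnitude.
    At 16 kHz, min_gap=8000 corresponds to a 0.5 s pause."""
    chunks, current, quiet = [], [], 0
    for s in samples:
        quiet = quiet + 1 if abs(s) < threshold else 0
        current.append(s)
        if quiet >= min_gap:
            voiced = current[:-quiet]
            if voiced:                 # keep only chunks that contain sound
                chunks.append(voiced)
            current, quiet = [], 0
    if any(abs(s) >= threshold for s in current):
        chunks.append(current)         # trailing chunk, if it has sound
    return chunks

# Demo: two 0.5 s "words" separated by a 1 s pause (16 kHz mono).
demo = [0.5] * 8000 + [0.0] * 16000 + [0.5] * 8000
print([len(c) for c in split_on_silence(demo)])  # -> [8000, 8000]
```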

emphasize commented 4 years ago

Thanks, that's not meant as a replacement, more an addition to the word-finder methods you suggest.

Mozilla's Common Voice dataset is an acceptable source, then. Sadly not single words, but short sentences of six or fewer words. And a hefty amount of data that's at least somewhat "peer-reviewed".

Do you recommend some ambient sound sources besides the tuxfamily.org suggestion?

-- Short additional question: what is meant by the batch (-b) option flag of precise-train?

Cheers, Swen

el-tocino commented 4 years ago

Precise community data has a not-wake-word section including some noises. The Google Speech Commands dataset is an ideal addition to not-wake-words (though it's large and will significantly increase training time). Recording ambient noise is pretty easy with a cell phone as well.

Batch size is useful for making a wider pass over the data in each epoch. I tend to use pretty large sizes (5000?); some experimentation would be useful.
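For intuition only (the clip counts here are invented, and this is not precise's actual training loop): the batch size sets how many clips share one weight update, so it determines how many updates an epoch takes.

```python
import math

def batches_per_epoch(n_samples: int, batch_size: int) -> int:
    """Every epoch still visits all samples once; batch size only
    decides how many samples go into each weight update."""
    return math.ceil(n_samples / batch_size)

# e.g. 20000 training clips:
print(batches_per_epoch(20000, 5000))  # -> 4 updates per epoch
print(batches_per_epoch(20000, 32))    # -> 625 updates per epoch
```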

Latest Common Voice now has a large subset of single word entries.

emphasize commented 4 years ago

Google Research has datasets for a lot of different languages (Nepali, who would have guessed), but unfortunately no German one. Or are you suggesting that the language itself plays a lesser role? GSC v2 is already downloaded, but then I realized: there's not much spoken English around here ;)

I think I will train them in a Raspbian virtual machine, if that's possible. Or switch to Windows completely for that process. My Pi buddies are already sweatin'.

el-tocino commented 4 years ago

The language isn't as important as the phonemes and the pattern of the words.

I'd train on a desktop rather than a pi with that volume of data. ;)

emphasize commented 4 years ago

After reviewing the Common Voice dataset more closely, I think I'll have to trim down parts

> based on silence between words

Do you mind sharing some useful sox commands?

Cheers

emphasize commented 4 years ago

I have a suggestion myself.

https://d-rhyme.de/worte-verdrehen/

In general it's aimed more at our German audience, but this particular section "twists words" so that the middle part of the name/word is replaced by random syllable(s)/letters while the word length stays constant, and it's therefore language-agnostic.

Let's say the wake word is "Samira". It spits out Salisa, Savita, Saliga, Sakita, ...

As I understand it, that should be a great addition to the word-finder/rhyme methods given in your howto.
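I don't know what that site does internally, but the "keep the word's shape, randomize its middle" idea can be sketched like this. The keep_prefix/keep_suffix split is my guess from the Samira examples, not the site's actual algorithm.

```python
import random
import string

def twist(word, keep_prefix=2, keep_suffix=1, rng=None):
    """Replace the middle of `word` with random lowercase letters,
    keeping its length and its first/last letters intact."""
    rng = rng or random.Random()
    middle_len = len(word) - keep_prefix - keep_suffix
    if middle_len <= 0:
        return word  # word too short to twist
    middle = "".join(rng.choice(string.ascii_lowercase)
                     for _ in range(middle_len))
    return word[:keep_prefix] + middle + word[-keep_suffix:]

rng = random.Random(1)
print([twist("samira", rng=rng) for _ in range(4)])
```

Feeding a few dozen of these per wake word into the not-wake-word folder mimics the "near miss" examples the thread describes.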

el-tocino commented 4 years ago

Try it and see?

Google "sox silence", I don't have it handy and the docs will explain the parameters better.
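For readers landing here later: a commonly used sox invocation for this kind of silence-based splitting looks like the sketch below. The thresholds (1%, 0.5 s) and filenames are illustrative, not a recommendation; check `man sox` for the exact semantics of the silence effect.

```shell
#!/bin/sh
# Sketch: split a clip on pauses using sox's "silence" effect.
# Synthesizes its own demo input so it is self-contained.
command -v sox >/dev/null 2>&1 || { echo "sox not installed, skipping"; exit 0; }

# Demo input: 0.4 s tone, 1 s silence, 0.4 s tone (16 kHz mono).
sox -n -r 16000 -c 1 tone.wav synth 0.4 sine 440
sox -n -r 16000 -c 1 gap.wav  trim 0 1.0
sox tone.wav gap.wav tone.wav combined.wav

# Cut wherever there is >= 0.5 s below 1% amplitude, writing
# chunk001.wav, chunk002.wav, ... (one short clip per spoken chunk).
sox combined.wav chunk.wav silence 1 0.1 1% 1 0.5 1% : newfile : restart

ls chunk*.wav
```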