Closed AASHISHAG closed 3 years ago
Hi,
Thanks for your comment. Unfortunately, it does not have a feature to put boundaries between phonemes, but I think it can be implemented in the following ways, depending on your situation.
1) If you have the transcription for the audio: you can convert your transcription to the (correct) phoneme sequence, then align it with the recognized output using edit distance. The shortest edit path will show you the boundary of each word. This should be very simple to implement.
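A minimal sketch of option 1, assuming you already have the transcript's phoneme sequence split into words (all function names here are illustrative, not part of allosaurus): align the reference phonemes with the recognized phonemes via edit distance, then project the known word boundaries onto the recognized sequence.

```python
def align(ref, hyp):
    """Edit-distance DP; returns {ref_index: hyp_index} for aligned pairs."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    # Backtrace the shortest path, recording matched/substituted pairs.
    pairs, i, j = {}, n, m
    while i > 0 and j > 0:
        cost = 0 if ref[i - 1] == hyp[j - 1] else 1
        if d[i][j] == d[i - 1][j - 1] + cost:
            pairs[i - 1] = j - 1
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return pairs

def project_boundaries(words, hyp):
    """words: list of phoneme lists, one per transcript word.
    Returns hyp split into per-word chunks. Assumes every word keeps
    at least one aligned phoneme in the recognized output."""
    ref = [p for w in words for p in w]
    pairs = align(ref, hyp)
    chunks, start, pos = [], 0, 0
    for w in words:
        pos += len(w)
        # hyp position just after the last aligned phoneme of this word
        end = max(pairs[k] + 1 for k in range(pos - len(w), pos) if k in pairs)
        chunks.append(hyp[start:end])
        start = end
    if start < len(hyp):  # trailing insertions go with the last word
        chunks[-1] = chunks[-1] + hyp[start:]
    return chunks
```

Because the alignment is monotonic, substitutions and insertions in the recognized output shift the boundaries gracefully instead of breaking the split.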
2) If you do not have the transcription for the audio: then recognizing word boundaries is very similar to a normal speech recognition task, because you need to know the underlying words. There are several ways to do it. If the vocabulary is limited, you can create a search graph (lattice) and search over it with the output phonemes. If you do not know the vocabulary, then you probably need to rely on a WFST or neural network decoder, which is not very easy to implement.
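For the limited-vocabulary case, a simple dynamic-programming search can stand in for a full lattice: segment the recognized phoneme stream into the concatenation of lexicon words with the lowest total edit cost. This is a toy sketch, not how a production WFST decoder works; the lexicon entries are hypothetical.

```python
def edit(a, b):
    """Standard Levenshtein distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def segment(hyp, lexicon):
    """lexicon: {word: phoneme tuple}. Returns (total_cost, word_list)
    for the cheapest segmentation of hyp into lexicon words."""
    m = len(hyp)
    best = [(0, [])] + [(float('inf'), None)] * m
    for j in range(1, m + 1):
        for word, phones in lexicon.items():
            # Prune: a word is unlikely to span more than 2x its own length.
            for i in range(max(0, j - 2 * len(phones)), j):
                if best[i][0] == float('inf'):
                    continue
                c = best[i][0] + edit(phones, hyp[i:j])
                if c < best[j][0]:
                    best[j] = (c, best[i][1] + [word])
    return best[m]
```

This scales poorly with vocabulary size, which is why real systems compile the lexicon into a WFST instead, but it illustrates the search-over-a-graph idea.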
Thanks!
Thank you for your suggestions.
I don't have transcripts and was wondering if there is an easy way to convert these continuous phoneme sequences to text.
Do you think that option 2 would work in my case?
I think there is no easy way to convert phoneme sequences to text, because there are lots of ambiguities in the phoneme outputs. You can implement a search graph with a pronunciation lexicon, but it might not give you very good results (the outputs typically do not match your lexicon perfectly).
If you want good word boundaries, I suggest you take a good existing speech recognition model to recognize the words, then convert the word outputs to phonemes afterwards. That might give you higher-quality boundaries than this tool.
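The last step of that pipeline is just a lexicon lookup: once a conventional ASR system has produced words, each word maps to its pronunciation, and the word boundaries come for free. A sketch with a toy stand-in for a real German pronunciation lexicon (the entries below are illustrative only):

```python
# Toy pronunciation lexicon; a real system would use a full G2P model
# or a dictionary covering the whole vocabulary.
LEXICON = {
    "schau": ["ʃ", "a", "ʊ"],
    "mal":   ["m", "a", "l"],
    "hin":   ["h", "ɪ", "n"],
}

def words_to_phonemes(words, lexicon=LEXICON):
    """Map ASR word output to a phoneme string with '|' marking
    word boundaries."""
    return " | ".join(" ".join(lexicon[w]) for w in words)
```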
I was actually looking to somehow use allosaurus for an ASR task.
Given the recognized phonemes from audio, I want to predict the true transcriptions. But I think that would be challenging due to ambiguities in the phoneme outputs, as you mentioned above.
In case you have some ideas, I would love to discuss. Otherwise, I can close this ticket.
Yeah, if you have some training set, you can probably use this tool to transcribe all the audio and then train another seq2seq model to map the phonemes to your transcription. This is probably the easiest way.
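The data-preparation side of that suggestion can be sketched as follows: run the phoneme recognizer over every training file, pair the output with the known text, and write parallel source/target files that a seq2seq toolkit can consume. `recognize` here is a stand-in for whatever recognizer you use (e.g. allosaurus), injected as a callable so the sketch stays self-contained.

```python
import pathlib

def write_parallel(manifest, recognize, out_dir):
    """manifest: list of (wav_path, text) pairs.
    Writes train.ph (phoneme source) and train.txt (text target),
    one line per utterance, for seq2seq training."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / "train.ph", "w", encoding="utf-8") as src, \
         open(out / "train.txt", "w", encoding="utf-8") as tgt:
        for wav, text in manifest:
            src.write(recognize(wav) + "\n")  # phoneme sequence (source)
            tgt.write(text + "\n")            # transcription (target)
```

From there, any encoder-decoder model trained on these parallel files would learn to resolve the phoneme ambiguities from context, which a lexicon lookup alone cannot do.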
Thank you for the suggestions. I am closing the ticket.
Hi,
Thank you for putting up the code open-source.
I have a question: is it somehow possible to add word boundaries to the recognized phoneme sequence?
For example:
Transcript for a wav file (German): schau mal hin ist das dorf noch nicht zu sehen
Phonemes recognized: ʃ a ʊ h m a l h ɪ n ɪ s t d a s d ɔ ə f n ɔ x n ɪ x t s u z e h ə n
Phonemes with word boundaries: ʃ a ʊ h | m a l | h ɪ n | ɪ s t | d a s | d ɔ ə f | n ɔ x | n ɪ x t | s u | z e h ə n
Not sure if I am missing something.
Thank you.