At the moment, Apple's speech recognizer only works with the language the user has set in the app (e.g. if the voice message is in Russian, the app language must also be Russian). We need to set up a pipeline that tries to detect the language on a short snippet of the voice message and uses those probabilities to drive the actual transcription.
One tricky part is that Apple's language recognition only runs in a single thread, so we need a sort of queue that tries a few popular languages one by one and then picks the best match.
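A minimal sketch of that queue, assuming the Speech framework (`SFSpeechRecognizer`) is used for probing: each candidate locale is tried strictly one at a time on the short snippet, the mean segment confidence is used as the "probability" for that language, and the best-scoring locale wins. The function names `probe` and `detectLanguage`, the candidate locale list, and the averaging heuristic are all illustrative assumptions, not part of Apple's API.

```swift
import Speech

// Hypothetical helper: transcribe a short clip with one candidate locale
// and return the recognizer's mean segment confidence for that guess.
func probe(clip: URL, locale: Locale) async -> Float {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.isAvailable else { return 0 }
    let request = SFSpeechURLRecognitionRequest(url: clip)
    request.shouldReportPartialResults = false // we only want the final result

    return await withCheckedContinuation { continuation in
        recognizer.recognitionTask(with: request) { result, error in
            guard let result, result.isFinal else {
                // An error without a final result ends this probe with score 0.
                if error != nil { continuation.resume(returning: 0) }
                return
            }
            let segments = result.bestTranscription.segments
            let mean = segments.isEmpty ? 0 :
                segments.map(\.confidence).reduce(0, +) / Float(segments.count)
            continuation.resume(returning: mean)
        }
    }
}

// Try a few popular locales sequentially (never concurrently, since the
// recognizer effectively behaves single-threaded) and keep the best guess.
func detectLanguage(of clip: URL) async -> Locale {
    let candidates = ["en-US", "ru-RU", "es-ES", "de-DE"]
        .map(Locale.init(identifier:))
    var best: (locale: Locale, score: Float) = (candidates[0], 0)
    for locale in candidates {
        let score = await probe(clip: clip, locale: locale)
        if score > best.score { best = (locale, score) }
    }
    return best.locale
}
```

Once `detectLanguage` has picked a locale, the full voice message can be transcribed with a fresh `SFSpeechRecognizer` configured for that locale. The sequential `for` loop is the "queue" here; Swift's `await` keeps the probes from overlapping.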