I'm using the Azure Speech to Text SDK version 1.21 in Go. I'm adding auto language detection for an audio file using the following code:
// Custom-model config for English, bound to a custom endpoint ("endpoint-id" is a placeholder).
engLangConfig, err := speech.NewSourceLanguageConfigFromLanguageAndEndpointId("en-US", "endpoint-id")
// error handling
sourceLanguageConfigs := []*speech.SourceLanguageConfig{engLangConfig}

// Plain (non-custom) configs for the remaining candidate languages.
langs := []string{"uk-UA", "ja-JP", "hi-IN", "pt-PT"}
for _, lang := range langs {
	conf, err := speech.NewSourceLanguageConfigFromLanguage(lang)
	if err != nil {
		log.Error(err.Error())
		continue
	}
	sourceLanguageConfigs = append(sourceLanguageConfigs, conf)
}

langConfig, err := speech.NewAutoDetectSourceLanguageConfigFromLanguageConfigs(sourceLanguageConfigs)
// error handling
// ("Fom" is the SDK's own spelling of this function name.)
recognizer, err := speech.NewSpeechRecognizerFomAutoDetectSourceLangConfig(speechConfig, langConfig, audioConfig)
// error handling
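For context, speechConfig and audioConfig in the snippet above are created roughly like this (a minimal sketch; the subscription key, region, and WAV file path are placeholders, not my real values):

import (
	"github.com/Microsoft/cognitive-services-speech-sdk-go/audio"
	"github.com/Microsoft/cognitive-services-speech-sdk-go/speech"
)

// Speech config from a subscription key and service region (placeholders).
speechConfig, err := speech.NewSpeechConfigFromSubscription("<subscription-key>", "<region>")
if err != nil {
	log.Error(err.Error())
	return
}
defer speechConfig.Close()

// Audio config reading from a WAV file on disk (placeholder path).
audioConfig, err := audio.NewAudioConfigFromWavFileInput("input.wav")
if err != nil {
	log.Error(err.Error())
	return
}
defer audioConfig.Close()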
In the first snippet, I'm passing a slice of source language configurations for multiple languages, including one configuration for a custom model with its endpoint ID.
This gives the wrong result: the transcription always uses the last element of the sourceLanguageConfigs slice instead of the detected language. I've verified this several times by reordering the elements of the slice, and the transcript always comes out in whichever source language config is last.
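For completeness, this is how I run recognition and read the result, following the pattern from the SDK's Go quickstart (common here is the github.com/Microsoft/cognitive-services-speech-sdk-go/common package); the transcript printed below is always in the last configured language:

// Run a single recognition and wait for its outcome.
outcome := <-recognizer.RecognizeOnceAsync()
defer outcome.Close()

if outcome.Error != nil {
	log.Error(outcome.Error.Error())
	return
}
if outcome.Result.Reason == common.RecognizedSpeech {
	// This text always matches the last entry of sourceLanguageConfigs.
	fmt.Println("Recognized:", outcome.Result.Text)
}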
Do I have to set an additional property? Why is this happening? Can anyone help with it?