/speak and /speakt rely on the AWS Comprehend API to detect the language before generating the voice.
However, a very common use case is learning pronunciation, where a user might send a single word. Language detection on a single word is unreliable, so this breaks for both false friends and cognates, and the user might end up learning the wrong pronunciation. We can emit meta information, such as the detected language name, alongside the audio file (there is an API to send voice with a caption).
Action items:
[x] Emit meta information about the detected language with the /speak command
[ ] Also show the textual translation when /speakt is used; otherwise, after running /speakt xyz, the user has to request the translation of xyz manually
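A minimal sketch of the detection-plus-caption flow. Only boto3's `detect_dominant_language` call is a real AWS Comprehend API; the function names, the `LANGUAGE_NAMES` table, and the caption format are assumptions for illustration, not the bot's actual code.

```python
LANGUAGE_NAMES = {"en": "English", "de": "German", "es": "Spanish", "fr": "French"}


def detect_language(text: str) -> str:
    """Return the dominant language code for `text` via AWS Comprehend."""
    import boto3  # requires AWS credentials to be configured

    client = boto3.client("comprehend")
    resp = client.detect_dominant_language(Text=text)
    # Comprehend returns a list of candidates with confidence scores;
    # pick the highest-scoring one.
    return max(resp["Languages"], key=lambda lang: lang["Score"])["LanguageCode"]


def build_caption(lang_code: str) -> str:
    """Caption (the 'meta information') to attach to the voice message."""
    name = LANGUAGE_NAMES.get(lang_code, lang_code)
    return f"Detected language: {name}"
```

The caption from `build_caption` would then be passed along when sending the voice file, so the user sees which language the pronunciation belongs to even for a single ambiguous word.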