Open egorky opened 4 months ago
Same issue with me, any thoughts?
It seems that once the first transcript is received from the API, the call disconnects.
No. It seems to me that you cannot detect the end of a sentence or silence with this protocol; you just have to wait until it times out. I had to switch to UniMRCP and everything worked fine.
Hi, everything is working fine. I got the whole project working, so I can call the extension, say a sentence, and then get the response from Google with the exact text I said. No problem there. BUT it only lets me speak for around 10 seconds, and even when the sentence is only 5 seconds long, I still have to wait 5 more seconds for the response to arrive. Is this by design? Is this project unable to detect silence? Is it a limitation of AEAP, or is there some kind of configuration I haven't found yet? Any help will be appreciated.
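For context, here is a minimal dialplan sketch of how such a speech app is typically invoked through the standard Asterisk speech API (the extension number, engine name, and prompt are placeholders, not taken from this project; the second argument to SpeechBackground is a timeout in seconds, which is one place a fixed ~10-second cutoff could come from if the engine itself never signals end-of-speech):

```
; Hypothetical dialplan sketch, assuming the AEAP engine is registered
; as a speech engine named "my-speech-to-text".
exten => 1234,1,Answer()
 same => n,SpeechCreate(my-speech-to-text)   ; allocate a speech object
 same => n,SpeechBackground(beep,10)         ; listen for up to 10 seconds
 same => n,Verbose(1,Result: ${SPEECH_TEXT(0)})
 same => n,SpeechDestroy()
 same => n,Hangup()
```

If the engine never reports completion on silence, SpeechBackground will run until that timeout expires, which would match the behavior described above.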