I found a solution: setting enforceRealtime = False makes the results come back much faster. The transcript is fetched almost in real time.
Hey @jays0606, you're right, enforceRealtime should always be False unless you're load testing. A couple of other tips to reduce latency:

- Set dg_interim_results to true to see non-final results, which will come at a faster cadence (every 1 second instead of every 3-5 seconds).
- For dg_callback, instead of passing an https endpoint and receiving the transcripts in POST requests, you can pass a wss endpoint and receive the transcripts over a websocket instead. This will likely improve latency by a small amount (see the sketch below).
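Not part of the thread itself, but as a hedged illustration of the wss option above: a minimal sketch of a websocket endpoint that could serve as the dg_callback target, using the Java-WebSocket library (org.java_websocket). The class name, port, and logging are assumptions; the only detail grounded in the thread is that transcripts arrive as websocket messages instead of HTTPS POSTs.

```java
import java.net.InetSocketAddress;

import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

// Hypothetical receiver: Deepgram connects to this endpoint (exposed as wss://
// behind a TLS-terminating proxy) and pushes transcript payloads as text frames.
public class TranscriptCallbackServer extends WebSocketServer {

    public TranscriptCallbackServer(int port) {
        super(new InetSocketAddress(port));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        System.out.println("Callback connection opened: " + conn.getRemoteSocketAddress());
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        // Each message is a transcript payload; parse or forward it here.
        System.out.println("Transcript received: " + message);
    }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) {
        System.out.println("Callback connection closed: " + reason);
    }

    @Override
    public void onError(WebSocket conn, Exception ex) {
        ex.printStackTrace();
    }

    @Override
    public void onStart() {
        System.out.println("Listening for transcript callbacks on port " + getPort());
    }

    public static void main(String[] args) {
        new TranscriptCallbackServer(8765).start();
    }
}
```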
Thanks for the reply. Also, is there a way I can receive only the FromCustomer audio input? Currently it defaults to two channels, FromCustomer and ToCustomer. Is receiving only FromCustomer possible, and would it also improve latency?
Thank you @RyanChimienti
=> This I've also accomplished. Just modify the code to receive only the FromCustomer audio, and change the Deepgram API queryString to
queryString.append("encoding=linear16&sample_rate=8000");
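For illustration only (not from the original code): a minimal sketch of how that query string could be assembled once the tips in this thread are combined. Only encoding=linear16&sample_rate=8000 comes from the comment above; channels and interim_results are standard Deepgram streaming query parameters, and the class name and URL assembly are assumptions, since the repo builds its request elsewhere.

```java
public class DeepgramQuerySketch {
    public static void main(String[] args) {
        StringBuilder queryString = new StringBuilder();

        // From the thread: single-channel, 8 kHz linear PCM (FromCustomer only).
        queryString.append("encoding=linear16&sample_rate=8000");
        queryString.append("&channels=1");

        // From the earlier tip: interim (non-final) results arrive at a faster cadence.
        queryString.append("&interim_results=true");

        // Standard Deepgram streaming endpoint, shown here only to make the sketch runnable.
        String url = "wss://api.deepgram.com/v1/listen?" + queryString;
        System.out.println(url);
    }
}
```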
I want the fastest possible STT result. Any ideas to optimize further?
@RyanChimienti