I plan to run auditok on-device to detect pauses in speech. Currently, this is handled as an API call: the front-end streams audio from the microphone to a back-end API, which performs the segmentation in real time.
Is it possible to convert this into something like a TensorFlow Lite model for on-device inference, rather than an API call?
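For context, here is a minimal sketch of what the current back-end segmentation might look like using auditok's `split()` API. This assumes auditok >= 0.2 and pyaudio for live microphone capture; the threshold and duration values are illustrative, not tuned:

```python
# Minimal sketch of energy-based pause detection with auditok.
# Assumes auditok >= 0.2 and pyaudio (needed for microphone input).
import auditok

regions = auditok.split(
    input=None,            # None = read live audio from the default microphone
    min_dur=0.3,           # shortest valid speech event, in seconds
    max_dur=5.0,           # longest valid speech event, in seconds
    max_silence=0.4,       # a pause longer than this ends the current segment
    energy_threshold=55,   # energy level above which audio counts as speech
)

for region in regions:
    # Each region is a detected speech chunk; the gaps between consecutive
    # regions are the pauses we care about.
    print(f"Speech from {region.meta.start:.2f}s to {region.meta.end:.2f}s")
```

Note that auditok's segmentation is a rule-based energy threshold, not a trained neural network, which is the crux of my question about whether a TensorFlow Lite conversion even applies here.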