Speech characteristics v2.0
Version 2.0 of the speech characteristics function processes multi-speaker JSONs, allowing analysis of a user-selected speaker. Outputs are now segmented by word, phrase, turn, and overall file. Refer to speech transcription cloud v1.0 for how to acquire labeled transcripts.
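As a rough illustration of speaker-selected analysis, the sketch below filters a labeled transcript down to one speaker and rolls results up at the turn and word levels. The JSON shape (`segments`, `speaker`, `turn`, `words` keys) is an assumption for the example, not the actual schema produced by speech transcription cloud v1.0.

```python
# Hypothetical multi-speaker transcript; the real schema from
# speech transcription cloud v1.0 may differ.
transcript = {
    "segments": [
        {"speaker": "speaker_0", "turn": 0, "words": ["hello", "there"]},
        {"speaker": "speaker_1", "turn": 1, "words": ["hi"]},
        {"speaker": "speaker_0", "turn": 2, "words": ["how", "are", "you"]},
    ]
}

def select_speaker(transcript, speaker):
    """Keep only the segments spoken by the requested speaker."""
    return [s for s in transcript["segments"] if s["speaker"] == speaker]

segments = select_speaker(transcript, "speaker_0")
turn_count = len(segments)                           # turn-level rollup
word_count = sum(len(s["words"]) for s in segments)  # word-level rollup
```

File-level measures would then aggregate over the selected segments only, so other speakers never contaminate the output.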
Eye blink rate v1.0
The new eye blink rate function allows for precise quantification of both basic blink rates and blink characteristics from videos of an individual.
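To make "basic blink rates and blink characteristics" concrete, here is a minimal sketch of the arithmetic involved, assuming blink onset/offset timestamps (in seconds) have already been detected from the video; the function name and inputs are illustrative, not the library's API.

```python
def blink_metrics(blink_onsets, blink_offsets, video_duration_s):
    """Blink rate (blinks/min) and mean blink duration (s) from
    detected blink onset/offset timestamps, in seconds."""
    n = len(blink_onsets)
    rate_per_min = 60.0 * n / video_duration_s
    durations = [off - on for on, off in zip(blink_onsets, blink_offsets)]
    mean_duration_s = sum(durations) / n if n else 0.0
    return rate_per_min, mean_duration_s

# Three blinks over a 60-second video → 3 blinks per minute.
rate, mean_dur = blink_metrics([1.0, 5.0, 9.0], [1.2, 5.3, 9.1], 60.0)
```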
Speaker separation cloud v1.0
For improved scalability, we’ve split speaker separation for pre-labeled multi-speaker JSONs into its own function. The existing speaker separation v1.1 function now handles JSONs without speaker labels.
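Because the input JSON already carries speaker labels, separation here reduces to grouping segments by label. The sketch below shows that core step under an assumed `segments`/`speaker` schema; it is not the function's actual implementation or signature.

```python
from collections import defaultdict

def separate_speakers(transcript):
    """Group a pre-labeled multi-speaker transcript into one
    segment list per speaker label."""
    by_speaker = defaultdict(list)
    for seg in transcript["segments"]:
        by_speaker[seg["speaker"]].append(seg)
    return dict(by_speaker)

transcript = {"segments": [
    {"speaker": "A", "text": "hello"},
    {"speaker": "B", "text": "hi"},
    {"speaker": "A", "text": "bye"},
]}
split = separate_speakers(transcript)  # {"A": [...2 segments...], "B": [...1 segment...]}
```

Unlabeled JSONs skip this path entirely and go through speaker separation v1.1, which must infer the labels itself.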