WICG / speech-api

Web Speech API
https://wicg.github.io/speech-api/

Support SpeechRecognition input from audio files and Float32Array and ArrayBuffer #70

Open guest271314 opened 4 years ago

guest271314 commented 4 years ago

Support .wav, .webm, .ogg, .mp3 files (file types supported by the implementation decoders) and Float32Array and ArrayBuffer input to SpeechRecognition.
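
A hypothetical shape for such an overload, for illustration only (the `start(data)` overload below does not exist in the spec or any implementation):

```js
// Hypothetical overload, for illustration only; not part of the current spec.
const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SR();
recognition.onresult = (e) => console.log(e.results[0][0].transcript);

(async () => {
  const response = await fetch('speech.wav');  // .wav/.webm/.ogg/.mp3 (placeholder file)
  const data = await response.arrayBuffer();   // ArrayBuffer input
  recognition.start(data);                     // hypothetical: static buffer instead of microphone
})();
```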

Use cases for static audio file and ArrayBuffer (non-"real-time") input to SpeechRecognition include, but are not limited to:

AudioWorkletNode can be used to stream Float32Array input.
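
A minimal sketch of that approach (the `'float32-source'` processor name and its message protocol are assumptions for illustration, not built-ins):

```js
// float32-source.js: plays queued Float32Array chunks.
// Simplified sketch: assumes mono, 128-sample chunks; real code would buffer and split.
class Float32Source extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = [];
    this.port.onmessage = (e) => this.queue.push(e.data);
  }
  process(inputs, outputs) {
    const out = outputs[0][0];                       // one 128-sample render quantum
    const chunk = this.queue.shift();
    if (chunk) out.set(chunk.subarray(0, out.length));
    return true;                                     // keep the processor alive
  }
}
registerProcessor('float32-source', Float32Source);
```

```js
// Main thread (module script, or wrap in an async function):
// queue Float32Array data and route it into the audio graph.
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('float32-source.js');
const source = new AudioWorkletNode(ctx, 'float32-source');
source.port.postMessage(new Float32Array(128).fill(0.1)); // any Float32Array chunk
source.connect(ctx.destination);                          // or a MediaStreamAudioDestinationNode
```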

Related https://github.com/w3c/speech-api/issues/66

Pehrsons commented 4 years ago

There are already several means of getting from audio files and buffers to audio MediaStreamTracks. Most of your example use cases are solvable by #66 and #69.
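
For instance, two existing routes, sketched (file name is a placeholder, run inside an async function; `captureStream()` availability varies by browser):

```js
// Route 1: play a file in a media element and capture it as a MediaStream.
const audio = new Audio('speech.wav');
await audio.play();
const trackFromElement = audio.captureStream().getAudioTracks()[0];

// Route 2: decode a buffer and play it into a MediaStream destination.
const ctx = new AudioContext();
const encoded = await (await fetch('speech.wav')).arrayBuffer();
const decoded = await ctx.decodeAudioData(encoded);
const src = new AudioBufferSourceNode(ctx, { buffer: decoded });
const dest = ctx.createMediaStreamDestination();
src.connect(dest);
src.start();
const trackFromBuffer = dest.stream.getAudioTracks()[0];
```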

The only thing this proposal would solve compared to those proposals is that it could process audio faster than real time, i.e., faster than it would take to play it out.

Personally I think that particular problem is better solved by integrating with something like WebCodecs if/when it becomes mature and available.

guest271314 commented 4 years ago

@Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on bring-your-own-codec than on an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

Internally, the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue, in general a WAV file.

It is not clear how either https://github.com/w3c/speech-api/issues/66 or https://github.com/w3c/speech-api/issues/69 would solve the use cases in this issue without converting a file or buffer to a MediaStreamTrack rather than simply using the file or buffer as input.

Pehrsons commented 4 years ago

> @Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on bring-your-own-codec than on an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

WebCodecs has not settled yet so I cannot say, but it's the only ongoing effort I'm aware of that would allow media data to be processed in non-realtime and passed around. There's OfflineAudioContext, but it doesn't really pipe into things. With WebCodecs it sounds like you'd get a ReadableStream of DecodedAudioPacket, which could be an input to SpeechRecognition, for instance.
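
For reference, an OfflineAudioContext sketch that renders decoded audio faster than real time but ends in an AudioBuffer rather than something that streams into a recognizer (file name is a placeholder, run inside an async function):

```js
// Decode an encoded file, then render it offline (faster than real time).
const encoded = await (await fetch('speech.wav')).arrayBuffer();
const decoded = await new AudioContext().decodeAudioData(encoded);

const offline = new OfflineAudioContext(1, decoded.length, decoded.sampleRate);
const src = new AudioBufferSourceNode(offline, { buffer: decoded });
src.connect(offline.destination);
src.start();
const rendered = await offline.startRendering(); // AudioBuffer, not a stream
const pcm = rendered.getChannelData(0);          // raw Float32Array samples
```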

> Internally, the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue, in general a WAV file.

To analyze any audio data you have to decode it first, so that seems reasonable. When UAs ship with STT engines that are local, it wouldn't make sense to hand them an encoded file.

> It is not clear how either https://github.com/w3c/speech-api/issues/66 or https://github.com/w3c/speech-api/issues/69 would solve the use cases in this issue without converting a file or buffer to a MediaStreamTrack rather than simply using the file or buffer as input.

Of course they'd solve it by decoding the file or buffer into a MediaStreamTrack. A fine solution, as different tools are good at different things.