Open chcunningham opened 3 years ago
It's true that hardware audio exists, but are there cases where an app would need control over whether it was used? In general I would expect that if a DSP exists, it would be used, and there wouldn't be a need for apps to override that. (Unlike with video I don't expect there to be a robustness gap between hardware and software.)
Not necessarily. The most common example is the mp3 decoder present on the overwhelming majority of mobile phones running Android: the system decoder (MediaCodec) is extremely power efficient, but also extremely slow (running at only a small multiple of playback speed).
When playing back music or other pre-recorded media, this means power consumption is minimal: the decoder processes large chunks of mp3 into very deep playback buffers, and with the screen turned off the device's battery lasts for ages.
All browser engines that I know of, when compiled for Android, now use ffmpeg to decode mp3 in AudioContext.decodeAudioData. This software decoder is not bound to real time, and its decoding speed matters a lot, for example to reduce loading times for video games.
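To make the game-loading case concrete, here is a minimal sketch of pre-decoding sound effects with decodeAudioData, where decode speed (not power) is what the app cares about. AudioContext and decodeAudioData are the real Web Audio API; the function name, URL list, and loading strategy are illustrative assumptions.

```javascript
// Sketch: decode a game's sound effects up front at load time.
// A decoder limited to a small multiple of real-time would make
// this step take nearly as long as the clips' total duration.
async function loadSfx(ctx, urls) {
  // Fetch and decode all clips in parallel; with a fast software
  // decoder this is bounded by I/O and CPU, not playback speed.
  return Promise.all(
    urls.map(async (url) => {
      const data = await (await fetch(url)).arrayBuffer();
      return ctx.decodeAudioData(data);
    })
  );
}
```

In a browser this would be called with a real AudioContext, e.g. `loadSfx(new AudioContext(), ["sfx/jump.mp3", "sfx/coin.mp3"])` (hypothetical asset paths).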
Triage note: marking as extension, since this just adds a dict member and the default behavior (allow) would match the current implementation.
Mobile devices frequently have DSP chips for audio.
Use the same HardwareAcceleration enum we already have for VideoDecoderConfig, and add it to AudioDecoderConfig: https://w3c.github.io/webcodecs/#hardware-acceleration
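A sketch of what the proposed dict member could look like from the app side. The codec string, sampleRate, and numberOfChannels fields are real AudioDecoderConfig members, and the enum values ("no-preference" | "prefer-hardware" | "prefer-software") are the existing HardwareAcceleration values from the spec; a `hardwareAcceleration` member on an *audio* config is exactly what this issue proposes and is hypothetical today (unknown dict members are currently ignored).

```javascript
// Build an AudioDecoderConfig with the *proposed* hardwareAcceleration
// member, reusing the enum already defined for VideoDecoderConfig.
function makeMp3Config(preference = "prefer-software") {
  return {
    codec: "mp3",                     // real WebCodecs codec string
    sampleRate: 44100,
    numberOfChannels: 2,
    hardwareAcceleration: preference, // proposed member, not shipped API
  };
}

// Usage sketch (browser-only, hence commented out): feature-detect,
// then ask the UA whether the preference can be honored.
// if (typeof AudioDecoder !== "undefined") {
//   const { supported } = await AudioDecoder.isConfigSupported(makeMp3Config());
// }
```

With the default "no-preference" behaving like today's implementation, existing callers would be unaffected, which is why the triage note treats this as a simple extension.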