WebAudio / web-audio-api

The Web Audio API v1.0, developed by the W3C Audio WG
https://webaudio.github.io/web-audio-api/

Device-related error reporting via AudioContext.onerror #2567

Open hoch opened 4 months ago

hoch commented 4 months ago

There were several requests in the past to expose an event handler for device-related error reporting.

Currently, AudioContext does not expose this type of error; it just silently fails and falls back to a browser-specific default. The spec says:

Note: It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.

One proposal would be adding onrendererror to AudioContext. This could be used for cases like the following (a rough usage sketch follows the list):

  1. Selecting a device via the context constructor options fails.
  2. The current device is lost for some reason (e.g. losing a Bluetooth connection).
  3. The system does not have any audio device available.
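
For illustration, a minimal sketch of how a page might use such a handler. The handler name and event shape are placeholders for whatever the group settles on, and someDeviceId stands in for an ID obtained elsewhere (e.g. via MediaDevices.enumerateDevices()):

// Sketch only: onrendererror is the name proposed above, not an existing API.
const context = new AudioContext({ sinkId: someDeviceId }); // someDeviceId obtained elsewhere

context.onrendererror = (event) => {
  // Would cover all three cases above: a bad sinkId at construction,
  // a device lost mid-rendering, or no audio device at all.
  console.error('Audio device error:', event);
};
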
hoch commented 4 months ago

Teleconf:

  1. The general pattern for web APIs is to use onerror.
  2. The context state will become "closed".

More thoughts:

  1. Look into what MediaElement does for error handling.
  2. For the privacy aspect, look into how the WebRTC APIs handle device failure.
hoch commented 4 months ago

More details from the teleconf discussion on 02/08:

The expected spec change:

  1. We can remove the following section:

    Note: It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.

  2. Add onerror to AudioContext's IDL.

  3. Add the description of onerror:

    • onerror is of type EventHandler. An ErrorEvent will be dispatched if the underlying audio device fails to activate.
gsinafirooz commented 4 months ago

I propose suspending the AudioContext instead of closing it. Developers will still be able to close it if they want.
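
For what it's worth, a minimal sketch of what that would mean for authors, assuming the handler ends up named onerror and the context is left in the "suspended" state (both still under discussion):

const context = new AudioContext();

context.onerror = async () => {
  // Under this proposal the context is suspended rather than closed,
  // so the author chooses whether to tear it down or keep it around.
  if (context.state === 'suspended') {
    await context.close(); // or keep the context and retry a device later
  }
};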

gsinafirooz commented 3 months ago

There are 2 major scenarios in which an error may occur:

  1. The sink ID provided to the constructor of AudioContext doesn't exist.
  2. The active sink becomes non-existent (e.g. it gets physically unplugged) while rendering.

Falling back to the default sink ID is also a possibility. So here is the list of proposals so far (a small sketch of how a page might observe either outcome follows the list):

  1. Suspend the AudioContext; and/or
  2. Fall back to the default sink ID.
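
If both end up in the spec (suspend plus fallback), a page could in principle tell the two apart after the fact. A sketch, assuming the handler is named onerror and that the existing AudioContext.sinkId attribute (the empty string for the default device) would reflect the fallback:

const context = new AudioContext({ sinkId: someDeviceId }); // someDeviceId obtained elsewhere

context.onerror = () => {
  if (context.state === 'suspended') {
    // Proposal 1: rendering was stopped; the page may resume() or close().
  }
  if (context.sinkId === '') {
    // Proposal 2: the context fell back to the default output device.
  }
};
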
hoch commented 3 months ago

Teleconference 3/7:

This change is only relevant to the device selection via the AudioContext constructor:

AudioContext.onerror can also be dispatched:

The following spec text (under https://webaudio.github.io/web-audio-api/#dom-audiocontext-audiocontext) needs to be removed:

Note: It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.

This means that the creation of an AudioContext can't fail. It will succeed no matter what, but invalid config options (e.g. an invalid sinkId) will put the context in a suspended state, with an error reported via the onerror event handler.
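
Sketched from the author's point of view (assuming the onerror name and an ErrorEvent, per the earlier comments; the sinkId value is deliberately bogus):

// Under this proposal the constructor never throws, even with an invalid sinkId.
const context = new AudioContext({ sinkId: 'bogus-device-id' });

context.onerror = (event) => {
  // The invalid sinkId leaves the context suspended instead of closed.
  console.log(context.state); // expected: "suspended"
  console.error(event.message);
};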

hoch commented 3 months ago

Some expected changes:

  1. IDL (https://webaudio.github.io/web-audio-api/#AudioContext):

partial interface AudioContext : BaseAudioContext {
  attribute EventHandler onerror;
};

  2. Removal of the note: "It is unfortunately not possible to programmatically notify authors..."

  3. Handling the system resource failure when handling the control message from the constructor:

Attempt to acquire system resources to use a following audio output device based on [[sink ID]] for rendering: ... In case of failure, abort the following steps.

Here we need to queue a task to fire an event named error at the AudioContext.
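
Not spec text, but a rough sketch of that constructor step from an implementation point of view; acquireSinkForRendering is a hypothetical stand-in for the "attempt to acquire system resources" step, and queueMicrotask stands in for "queue a task" on the control thread:

const context = new AudioContext();

// Hypothetical stand-in for "Attempt to acquire system resources ... based on [[sink ID]]".
function acquireSinkForRendering(sinkId) {
  throw new Error('audio output device unavailable');
}

try {
  acquireSinkForRendering('some-sink-id');
} catch (e) {
  // "In case of failure, abort the following steps", then queue a task to
  // fire an ErrorEvent named "error" at the AudioContext.
  queueMicrotask(() => {
    context.dispatchEvent(new ErrorEvent('error', { message: e.message }));
  });
}
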

WDYT? @padenot @chrisguttandin @mjwilson-google @gsinafirooz

mjwilson-google commented 3 months ago

I think the acquire system resources section is part of the setSinkId() method, correct?

So we would also need additional text in:

  • Somewhere to note that onerror should fire if the existing sink suddenly becomes unavailable

hoch commented 3 months ago

I think the acquire system resources section is part of the setSinkId() method, correct?

No, the location I meant was the second half of the AudioContext constructor.

Somewhere to note that onerror should fire if the existing sink suddenly becomes unavailable

I think we have two options:

  1. Add a check within the rendering loop to monitor device status and dispatch an appropriate event.
  2. Add a clarifying note to the onerror description, indicating that the system might trigger it in case of sink failure while the renderer is running.

The second one is my preference, but would love to hear what others think.
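
Either way, from the page's side it would look roughly the same. A sketch of recovering from a sink that disappears mid-rendering, assuming the onerror handler plus the existing AudioContext.setSinkId(); someDeviceId again stands in for an ID obtained elsewhere:

const context = new AudioContext({ sinkId: someDeviceId }); // someDeviceId obtained elsewhere

context.onerror = async () => {
  // The sink failed while the renderer was running; the context is expected
  // to end up suspended, so fall back to the default device and resume.
  await context.setSinkId(''); // the empty string selects the default output device
  await context.resume();
};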

mjwilson-google commented 3 months ago

No, the location I meant was the second half of the AudioContext constructor.

I see it now. Thank you.

We should probably be explicit that onerror is for device/sink problems, and not other types of errors.

  1. Add a check within the rendering loop to monitor device status and dispatch an appropriate event.

This would be in https://webaudio.github.io/web-audio-api/#rendering-loop, correct? It would be a more explicit specification, which should increase consistency across implementers but will give less flexibility. The rendering loop is one of the more performance-critical sections, so I would prefer to not put anything in it that's not strictly necessary.

  2. Add a clarifying note to the onerror description, indicating that the system might trigger it in case of sink failure while the renderer is running.

This would give implementers more flexibility. However, I'm worried that if we underspecify it we could have interoperability problems. Could we write it as a normative SHOULD?

padenot commented 3 months ago

This works as long as there's a defined ordering between onerror being called and, e.g., AudioWorkletProcessor having its process method called. We can spec that no process call can happen after onerror has been triggered from the backend (not fired on the main thread).

hoch commented 3 months ago

Based on the discussion so far, onerror should trigger suspension. It will perform this algorithm: https://webaudio.github.io/web-audio-api/#dom-audiocontext-suspend

The rendering loop algorithm 4.1 already checks the rendering state flag: https://webaudio.github.io/web-audio-api/#rendering-loop

Re @padenot's thought:

  1. An interruption from the underlying audio device happens (WebAudio render thread?)
  2. Upon this interruption, change the rendering state flag and queue a task to fire onerror and statechange (from WebAudio render thread)
  3. AWP's process won't get called anymore because the loop will stop due to the rendering state flag

One thing I am not so sure about is that 1) might not be called from WebAudio's render thread. It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.

padenot commented 3 months ago

It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.

Indeed, it can be either the thread on which the real-time callbacks are called, or some other non-realtime thread on the same platform (sometimes it differs depending on the version or settings). Generally we see the following breakdown:

In all cases Gecko doesn't assume anything about the thread and immediately dispatches an event to a known thread it has, for uniform handling.

o1ka commented 3 months ago

One thing I am not so sure about is that 1) might not be called from WebAudio's render thread. It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.

Chromium will dispatch the event to the WebAudio rendering thread or the main thread. WebAudio never interacts with platform audio threads directly.

We can ensure that the event always arrives on the main thread if it's preferred.

hoch commented 3 months ago

@padenot Thanks for looking into that. That confirms my suspicions.

@o1ka Thanks for jumping in! The discussion above is not about AudioContext.onerror dispatching. It is about the system interruption to the WebAudio render thread (e.g. RWADI::OnRenderError).

In Chromium, I believe this is called from AOD::NotifyRenderCallbackOfError(), and this is the browser's I/O thread (please correct me if I am mistaken), not the WebAudio render thread. Hence the question: how do we describe this with the current spec text (without introducing new concepts/terms) when we don't define the browser's I/O thread anywhere?

o1ka commented 3 months ago

@hoch thanks for pointing to that one. So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures). As I said earlier, the implementation can ensure all these events are eventually delivered on the main thread, if that's convenient.

Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?

As I see it: the error can occur during construction, during setSinkId() processing, or during rendering. The sources of error may be an invalid sink id (by mistake, or a sink is removed, or no permissions) or a device/system malfunction (e.g. a CoreAudio crash). In any case (modulo some exceptions for backward compatibility), if the rendering has already started, it will be stopped (is this what you refer to as the system interruption to the WebAudio render thread?) and the error event will be dispatched. The system can guarantee that processing/rendering is stopped by the time the error event is fired. That's all the observable behavior. Isn't it enough? Am I missing something?

hoch commented 3 months ago

So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures).

Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?

That's because, unlike other audio-related APIs on the web, the Web Audio API spec exposes the notion of the control thread and the render thread. The problem is that we want to stop the render loop by setting the rendering thread state flag before anything else, so we don't continue with the next iteration of the render call. We have to add spec text for this behavior.

(This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", and I agree.)
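
A rough sketch of that ordering (not spec text; _renderingThreadState is a stand-in for the spec's [[rendering thread state]] internal slot, and queueMicrotask stands in for queueing a task on the control thread):

// On a device failure reported from the backend:
function handleDeviceFailure(context) {
  // 1. Flip the rendering state first, so the next iteration of the rendering
  //    loop does not run and no AudioWorkletProcessor.process() call can
  //    happen after this point.
  context._renderingThreadState = 'suspended'; // stand-in for [[rendering thread state]]

  // 2. Then notify the control thread: fire "error" followed by "statechange".
  queueMicrotask(() => {
    context.dispatchEvent(new ErrorEvent('error'));
    context.dispatchEvent(new Event('statechange'));
  });
}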

The system can guarantee that processing/rendering is stopped by the time the error event fired. That's all the observed behavior. Isn't it enough? Am I missing something?

I agree that it is something obvious, but we still need to explain how that's guaranteed.

One thing we have not discussed so far is the interaction between onerror and suspend(). When onerror gets fired in the middle of the rendering loop, the algorithm of suspend() will also be executed. The suspend() algorithm has tasks for both threads (control and render) and also changes the rendering thread state flag. I have not looked into this in detail yet...

o1ka commented 3 months ago

So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures).

  • The main thread: constructor

And also setSinkId() processing

  • The audio render thread: ??? - I am not sure how this can interrupt itself?

I don't quite understand the question - what does interrupt mean here?

(This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", which I agree)

Yes, this makes sense. And also, this seems to be enough? How error events are shuffled between internal threads inside, let's say, Chromium, and what additional threads (IO or other) are involved are implementation details. For example, the IO thread is involved when AudioContext starts/stops rendering - this is not surfaced anywhere.

hoch commented 3 months ago

I don't quite understand the question - what does interrupt mean here?

For example, can the audio render thread (AudioOutputDevice in Chromium) invoke an error notification (e.g. onRenderError() in Chromium) directly? That sounds... a bit weird.

Yes, this makes sense. And also this seems to be enough? How error events are shuffled between internal threads inside, let's say, chromium, and what additional threads (IO or other) are involved are implementation details. For example, IO thread is involved when AudioContext starts/stops rendering - this is not surfaced anywhere.

If you look into the Web Audio API specification in detail, you'll see we acknowledge the existence of the audio render thread and that some algorithms are executed on that thread. So I kinda agree with you that we can get away with the "implementation detail" argument for some parts, but there are other parts we need to specify clearly. I won't know for sure until I start writing an actual PR (which is my next step).