hoch opened 4 months ago
Teleconf:

- `onerror`
- `.closed`

More thoughts:

- `onerror` is the common pattern from the Web API
- `onerror` via the task scheduler. (async)
- `context.state === 'closed'`

We can remove the following section:
> Note: It is unfortunately not possible to programatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.
- Add `onerror` to AudioContext's IDL.
- Add the description of `onerror`:
  - `onerror` is of type `EventHandler`. An `ErrorEvent` will be dispatched if the underlying audio device fails to activate.

I propose suspending the AudioContext instead of closing it. Developers will still be able to close it if they want.
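To make the proposed semantics concrete, here is a rough sketch of the author-observable behavior: on a device failure the context becomes suspended (not closed) and an `error` event fires. The class below is only a minimal stand-in for AudioContext so the sketch is self-contained; it is not the real API, and the real proposal queues a task to fire the event asynchronously.

```javascript
// Minimal stand-in for the proposed AudioContext behavior (not the real API).
class FakeAudioContext extends EventTarget {
  constructor() {
    super();
    this.state = "running";
  }
  // UA-internal reaction to an audio-device failure under this proposal.
  // (The real proposal would queue a task to fire the event asynchronously;
  // it is dispatched synchronously here for brevity.)
  simulateDeviceFailure() {
    this.state = "suspended";
    this.dispatchEvent(new Event("error"));
  }
}

const ctx = new FakeAudioContext();
ctx.addEventListener("error", () => {
  // The context is only suspended, so the author can still decide to
  // resume() with another sink or close() it for good.
  console.log(`device failed, state is now: ${ctx.state}`);
});
ctx.simulateDeviceFailure(); // prints "device failed, state is now: suspended"
```

The point of suspending rather than closing is visible in the listener: the author keeps a live (if silent) context to recover with.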
There are two major scenarios in which an error may occur:

1. ... `AudioContext` ...
2. ... doesn't exist.

Falling back to the default sink ID is also a possibility. So here is the list of proposals so far:

- ... the `AudioContext`; and/or
- ...

Teleconference 3/7:
This change is only relevant to the device selection via the AudioContext constructor:

- ... `onerror` will be dispatched.

AudioContext.onerror can also be dispatched:
The following spec text needs to be removed: (Under https://webaudio.github.io/web-audio-api/#dom-audiocontext-audiocontext)
> Note: It is unfortunately not possible to programatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.
This means that the creation of AudioContext can't fail. It will succeed no matter what, but invalid config options (e.g. an invalid sinkId) will put the context in a suspended state with an error message via the onerror event handler.
Some expected changes:
```webidl
partial interface AudioContext : BaseAudioContext {
  attribute EventHandler onerror;
};
```

- Removal of the note: "It is unfortunately not possible to programatically notify authors..."
Handling the system resource failure in handling a control message from the constructor:
> Attempt to acquire system resources to use a following audio output device based on [[sink ID]] for rendering: ... In case of failure, abort the following steps.
Here we need to queue a task to fire an event named `error` at the AudioContext.
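The amended control-message handling could be sketched roughly as below. The task queue and function names here are stand-ins for the spec's control thread machinery, not proposed spec text:

```javascript
// Rough sketch: if acquiring the output device fails, queue a control-thread
// task that suspends the context and fires "error" at it, then abort the
// remaining steps. controlThreadTasks stands in for the control thread's
// task queue.
const controlThreadTasks = [];

function handleConstructorControlMessage(context, acquireDevice) {
  // "Attempt to acquire system resources ... based on [[sink ID]]"
  if (!acquireDevice()) {
    // In case of failure, queue the error task and abort the following steps.
    controlThreadTasks.push(() => {
      context.state = "suspended";
      context.dispatchEvent(new Event("error"));
    });
    return;
  }
  // ... on success, the remaining rendering-startup steps would run here ...
  context.state = "running";
}

const context = new (class extends EventTarget { state = "suspended"; })();
handleConstructorControlMessage(context, () => false); // device acquisition fails
while (controlThreadTasks.length) controlThreadTasks.shift()(); // control thread drains its queue
```

Note the ordering this implies: the constructor itself still returns a context object; only the queued task later surfaces the failure.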
WDYT? @padenot @chrisguttandin @mjwilson-google @gsinafirooz
I think the acquire system resources section is part of the setSinkId() method, correct?
So we would also need additional text in: ... `onerror` should fire if the existing sink suddenly becomes unavailable (scenario 2 in https://github.com/WebAudio/web-audio-api/issues/2567#issuecomment-1984120388).

> I think the acquire system resources section is part of the setSinkId() method, correct?

No, the location I meant was the second half of the AudioContext constructor.
> Somewhere to note that onerror should fire if the existing sink suddenly becomes unavailable
I think we have two options:
The second one is my preference, but would love to hear what others think.
> No, the location I meant was the second half of the AudioContext constructor.
I see it now. Thank you.
We probably should be explicit that `onerror` is for device / sink problems, and not other types of errors.
> - Add a check within the rendering loop to monitor device status and dispatch an appropriate event.
This would be in https://webaudio.github.io/web-audio-api/#rendering-loop, correct? It would be a more explicit specification, which should increase consistency across implementers but will give less flexibility. The rendering loop is one of the more performance-critical sections, so I would prefer to not put anything in it that's not strictly necessary.
> - Add a clarifying note to the onerror description, indicating that the system might trigger it in case of sink failure while the renderer is running.
This would give implementers more flexibility. However, I'm worried that if we underspecify it we could have interoperability problems. Could we write it as a normative SHOULD?
As long as there's a defined ordering between `onerror` being called and e.g. `AudioWorkletProcessor` having its `process` method called. We can spec that no `process` can happen after `onerror` has been triggered from the backend (not fired on the main thread).
Based on the discussion so far, `onerror` should trigger suspension. It will perform this algorithm: https://webaudio.github.io/web-audio-api/#dom-audiocontext-suspend

The rendering loop algorithm (step 4.1) already checks the rendering state flag: https://webaudio.github.io/web-audio-api/#rendering-loop
Re padenot@'s thought:

1. `onerror` and `statechange` (from WebAudio render thread)
2. `process` won't get called anymore because the loop will stop due to the rendering state flag

One thing I am not so sure about is that 1) might not be called from the WebAudio's render thread. It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.
> It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.
Indeed, it can also be both the thread on which the real-time callbacks are called, or some other non-realtime thread on the same platform (sometimes it differs depending on the version or settings). Generally we see the following breakdown:
In all cases Gecko doesn't assume anything about the thread and immediately dispatches an event to a known thread it has, for uniform handling.
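The "dispatch to a known thread" pattern Gecko uses can be sketched like this. The queue below is just a stand-in for the target thread's event loop, and none of the names come from Gecko:

```javascript
// Whatever thread the platform reports the failure on, do no work there:
// immediately marshal the error onto one known event loop and handle it
// only from that loop, so every platform behaves identically downstream.
const knownThreadQueue = []; // stand-in for the known thread's task queue
const handled = [];

function onPlatformAudioError(err) {
  // May be invoked from a real-time callback thread or some other platform
  // notification thread; just enqueue and return immediately.
  knownThreadQueue.push(() => handled.push(err));
}

onPlatformAudioError("device invalidated");
while (knownThreadQueue.length) knownThreadQueue.shift()(); // known thread runs its tasks
```

The benefit is that the per-platform variation (which thread reports the error) is contained entirely in the marshalling step.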
> One thing I am not so sure is that 1) might not called from the WebAudio's render thread. It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.
Chromium will dispatch the event to WebAudio rendering thread or the main thread. WebAudio never interacts with platform audio threads directly.
We can ensure that the event always arrives on the main thread if it's preferred.
@padenot Thanks for looking into that. That confirms my suspicions.
@o1ka Thanks for jumping in! The discussion above is not about AudioContext.onerror dispatching. It is about the system interruption to the WebAudio render thread. (e.g. RWADI::OnRenderError)
In Chromium, I believe this is called from AOD::NotifyRenderCallbackOfError() and this is the browser's I/O thread - please correct me if I am mistaken - not the WebAudio render thread. Hence the question: how do we describe this with the current spec text (without introducing new concepts/terms) when we don't define browser's I/O thread anywhere?
@hoch thanks for pointing to that one. So from Chromium perspective, render error can be raised on the real-time rendering thread, on the main thread or on the IO thread (in case of IPC failures). As I said earlier, the implementation can ensure all these events are eventually delivered on the main thread - if it's convenient.
Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?
As I see it: the error can occur during construction, during setSinkId() processing, or during rendering. The sources of error may be: an invalid sink id (by mistake, or a sink is removed, or no permissions) or device/system malfunction (ex: CoreAudio crash). In any case (% some exceptions for backward compatibility) if the rendering has already started, it will be stopped (is this what you refer to as the system interruption to the WebAudio render thread?) and the error event will be dispatched. The system can guarantee that processing/rendering is stopped by the time the error event is fired. That's all the observed behavior. Isn't it enough? Am I missing something?
> So from Chromium perspective, render error can be raised on the real-time rendering thread, on the main thread or on the IO thread (in case of IPC failures).

> Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?
That's because unlike other audio-related APIs on the web, the Web Audio API spec exposes the notion of the control thread and the render thread. The problem is that we want to stop the render loop by setting the render thread flag before anything else, so we don't continue the subsequent iteration of the render call. We have to add spec text for this behavior.

(This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", with which I agree.)
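A sketch of that "flag first, event second" ordering, using an atomic as a stand-in for the spec's render thread flag (all names here are illustrative, not spec text):

```javascript
// The error path stores the flag before anything else; the render loop
// loads it at the top of each iteration, so once the flag is set no
// further render quantum (and hence no process() call) can begin.
const renderThreadFlag = new Int32Array(new SharedArrayBuffer(4)); // 0 = running, 1 = stopped

function onBackendError(queueErrorEventTask) {
  Atomics.store(renderThreadFlag, 0, 1); // 1) stop the loop first ...
  queueErrorEventTask();                 // 2) ... then queue onerror/statechange
}

let quantaRendered = 0;
function renderLoopIteration() {
  if (Atomics.load(renderThreadFlag, 0) !== 0) return false; // loop exits
  quantaRendered += 1; // a real loop would call process() on each node here
  return true;
}

renderLoopIteration();    // renders one quantum
onBackendError(() => {}); // backend failure signalled
renderLoopIteration();    // returns false; no further quantum begins
```

Because the store happens before the event task is queued, no `process` call can be observed after `onerror` has been triggered from the backend, matching the guarantee discussed above.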
> The system can guarantee that processing/rendering is stopped by the time the error event fired. That's all the observed behavior. Isn't it enough? Am I missing something?
I agree that it is something obvious, but we still need to explain how that's guaranteed.
One thing we have not discussed so far is the interaction between `onerror` and `suspend()`. When `onerror` gets fired in the middle of the rendering loop, the algorithm of `suspend()` will also be executed. The `suspend()` algorithm has tasks for both threads (control, render) and also changes the render thread flag. I have not looked into this in detail yet...
> So from Chromium perspective, render error can be raised on the real-time rendering thread, on the main thread or on the IO thread (in case of IPC failures).

- The main thread: constructor
  - And also setSinkId() processing
- The audio render thread: ??? - I am not sure how this can interrupt itself?
I don't quite understand the question - what does interrupt mean here?
> (This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", which I agree)
Yes, this makes sense. And also this seems to be enough? How error events are shuffled between internal threads inside, let's say, chromium, and what additional threads (IO or other) are involved are implementation details. For example, IO thread is involved when AudioContext starts/stops rendering - this is not surfaced anywhere.
> I don't quite understand the question - what does interrupt mean here?
For example, can the audio render thread (AudioOutputDevice in Chromium) invoke an error notification (e.g. `onRenderError()` in Chromium) directly? That sounds... a bit weird.
> Yes, this makes sense. And also this seems to be enough? How error events are shuffled between internal threads inside, let's say, chromium, and what additional threads (IO or other) are involved are implementation details. For example, IO thread is involved when AudioContext starts/stops rendering - this is not surfaced anywhere.
If you look into the Web Audio API specification in detail, you'll see we acknowledge the existence of the audio render thread, and some algorithms are executed on that thread. So I kind of agree with you that we can get away with the "implementation detail" argument for some parts, but there are other parts we need to specify clearly. I won't know for sure until I start writing an actual PR. (which is my next step)
There were several requests in the past to expose an event handler for device-related error reporting.
Currently AudioContext does not expose this type of error; it just silently fails and falls back to a browser-specific default. The spec says:

> Note: It is unfortunately not possible to programatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.

One proposal would be having `onrendererror` under AudioContext. This can be used for cases like:
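A hypothetical author-facing usage might look like the sketch below. `onrendererror` is only the name floated in this issue and does not exist in any shipping browser, so the class here is a stand-in that models the proposed surface just enough to make the sketch self-contained:

```javascript
// Stand-in modeling the proposed handler surface: a way to observe
// device/render errors that today are silently swallowed.
class ProposedAudioContext extends EventTarget {
  // Simplified: a real IDL event handler attribute would replace, not
  // accumulate, handlers.
  set onrendererror(handler) { this.addEventListener("rendererror", handler); }
  // UA-internal: report a failure such as the sink disappearing mid-render.
  simulateRenderError(message) {
    const event = new Event("rendererror");
    event.message = message; // the real event would likely be an ErrorEvent
    this.dispatchEvent(event);
  }
}

const audioCtx = new ProposedAudioContext();
audioCtx.onrendererror = (event) => {
  // e.g. the selected device was unplugged, or the platform audio
  // service crashed while rendering.
  console.warn("audio rendering failed:", event.message);
};
audioCtx.simulateRenderError("selected output device disconnected");
```

The handler gives authors a hook where today the only signal is a developer-console log, per the spec note quoted above.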