hoch closed this issue 7 years ago.
I believe each `AudioContext` should have one `AudioWorkletGlobalScope`. This conflicts with the other Worklet variants, because they use a single `WorkletGlobalScope` per frame.
Or we can be bold and use a single `AudioWorkletGlobalScope` for all the `AudioContext`s in the same frame. Let developers do the inter-context communication!!
`AudioNode`s, as they are today, don't really communicate with each other in an observable way. However, some state is shared between them (mainly large buffers, for example wave tables) to optimize memory and CPU usage.
Considering that one of the design goals for `AudioWorklet` is to make it possible to reimplement the native nodes, we should make sure those optimizations can be implemented.
Would it be acceptable to share a read-only `ArrayBuffer` slice? I don't know whether a DOM concept for this exists, or whether one has been implemented yet. Something like this: https://gist.github.com/dherman/5401735.
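For reference, the closest primitive the platform offers today is `SharedArrayBuffer`: it is shared rather than copied or detached when posted, though it is not read-only, so it only approximates the frozen-slice idea above. A minimal sketch:

```javascript
// A SharedArrayBuffer is shared, not detached, when sent via postMessage.
// Note: it is writable by all parties, unlike the read-only slice proposal.
const sab = new SharedArrayBuffer(8);
const view = new Float64Array(sab);
view[0] = 440;

// A second view (as the receiving side would create) sees the same memory:
const other = new Float64Array(sab);
console.log(other[0]); // 440
```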
`AudioBuffer` is not associated with any `BaseAudioContext`, so sharing an `AudioBuffer` object between `AudioWorkletNode`s created by different audio contexts should not be an issue. Or do you mean that an `AudioWorkletGlobalScope` can be shared by multiple `AudioContext`s? Or do we want to be able to share `ArrayBuffer`s through the `WorkletGlobalScope`?
Before taking a look at the link you posted, I wanted to clarify your intention first.
I meant that we should ensure that big buffers can be shared between `AudioWorkletNode`s. This should ideally be supported for things other than `AudioBuffer`.
For now, if `postMessage` works the same for a worklet as it does for a worker, an `ArrayBuffer` that is sent through `postMessage` is detached and can't be sent again, iirc.
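That detach behavior can be observed with `structuredClone()`, which uses the same serialization and transfer machinery that `postMessage` does. A sketch:

```javascript
// Transferring an ArrayBuffer detaches it on the sending side, just as
// posting it to a worker with a transfer list would.
const buf = new ArrayBuffer(16);
const clone = structuredClone(buf, { transfer: [buf] });

console.log(clone.byteLength); // 16 -- the receiver gets the data
console.log(buf.byteLength);   // 0  -- the original is detached
```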
Yes, I wanted to discuss this at the TPAC F2F: do we really want to support the complete `postMessage` mechanism like Worker does?
I am not 100% sure about this. After the discussion with our architecture engineers, the Worklet infrastructure will be very similar to Worker, with full `postMessage` support. We might need something lighter than `postMessage`, so I am envisioning a new method like `sendEvent` with a single argument of structured-cloned (serialized) data.
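For illustration, structured cloning without a transfer list deep-copies the data, which is the cost a method like the hypothetical `sendEvent` above would pay per message for large buffers. A sketch:

```javascript
// Structured cloning copies typed arrays rather than sharing them, so a
// large wave table sent this way is duplicated for each message.
const data = { wavetable: new Float32Array([0, 0.5, 1]) };
const copy = structuredClone(data);

copy.wavetable[0] = 9;
console.log(data.wavetable[0]); // 0 -- the original is untouched
```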
Of course the best solution for us audio people would be to be able to share read-only data, but again, I'm not sure whether that is supported, and if it is, I have no idea of the specifics.
Resolved by one-scope-per-worklet, forthcoming changes to property exposure of processors, and removal of `postMessage` support.
Whoops, I didn't mean to close this! It needs a PR to document it.
Note: I don't think we need to say this yet, but if we ever support multiple audio rendering threads within an AudioContext, then we would need to create multiple AWGSs in an AudioContext, one per each thread, to avoid race conditions.
@joeberkovitz the idea of a multi-threaded renderer sounds intriguing, but definitely not for V1. :)
As I stated in the previous meeting, this is my proposal:
@hoch +1 to almost all that you said. The multithreading issue is a potential consequence for V2, that's all. I was just calling it out to make it visible.
However, I am not sure that the name-to-processor-definition map can belong to the `AudioWorklet` object. I think it must be a separate map for each BAC/AWGS pair, because it is `registerProcessor()` that establishes the map's contents, and `registerProcessor()` is only exposed by an AWGS.
We talked about this at the F2F, but I think perhaps you were not on that part of the call. I think the best way to document the AWP/AWGS correspondence is to explain that each imported script is executed against each new AWGS as part of its creation process. This results in a distinct `registerProcessor()` invocation for a given AWP class within each AWGS, and that, in turn, is what establishes the AWP/AWGS relationship.
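The per-scope registration described above can be modeled in plain JavaScript. This is an illustrative model, not the real API; `AudioWorkletGlobalScopeModel` and its fields are invented names:

```javascript
// Model: each global scope owns its own name -> processor-definition map,
// populated only through that scope's registerProcessor().
class AudioWorkletGlobalScopeModel {
  constructor() {
    this.definitions = new Map();
  }
  registerProcessor(name, processorCtor) {
    if (this.definitions.has(name)) {
      throw new Error(`'${name}' is already registered in this scope`);
    }
    this.definitions.set(name, processorCtor);
  }
}

// An imported script is executed against each new scope as it is created,
// so the same registerProcessor() call runs once per scope:
const importedScript = (scope) => {
  scope.registerProcessor('gain', class GainProcessor {});
};

const scopeA = new AudioWorkletGlobalScopeModel(); // e.g. for context A
const scopeB = new AudioWorkletGlobalScopeModel(); // e.g. for context B
importedScript(scopeA);
importedScript(scopeB);

console.log(scopeA.definitions.has('gain')); // true
console.log(scopeB.definitions.has('gain')); // true -- separate maps, same name
```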
> However, I am not sure that the name-to-processor-definition map can belong to the `AudioWorklet` object. I think it must be a separate map for each BAC/AWGS pair, because it is `registerProcessor()` that establishes the map's contents, and `registerProcessor()` is only exposed by an AWGS.
This is acceptable when `audioWorklet.import()` updates all the maps owned by each BAC/AWGS pair in the same frame. If this is not what you think, then it means BACs cannot share this map, and thus developers have to call `import()` again for other BACs to register the same processor.
Furthermore, if a map belongs to a BAC/AWGS pair, we might want to go ahead and move the `audioWorklet` object under the context, not the window. (This was @padenot's proposal, btw.)
Whichever decision we make, it significantly affects the fundamental design of the AudioWorklet system.
> This is acceptable when `audioWorklet.import()` updates all the maps owned by each BAC/AWGS pair in the same frame.
Yes, that is what I think.
> Furthermore, if a map belongs to a BAC/AWGS pair we might want to go ahead and move the `audioWorklet` object under the context, not the window.
I'd say not, because the `audioWorklet` object is responsible for maintaining a list of all previously imported scripts, which must be applied not only to all existing AWGSs but also to any newly created AWGS in the future, in order to avoid having to call `import()` multiple times.
@padenot proposed this behavior at the F2F and it made sense to me.
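That replay behavior can also be sketched as a plain-JS model (invented names, not the real API): the `audioWorklet` object remembers every imported script and applies the whole list to each new scope.

```javascript
// Model: audioWorklet keeps the list of imported scripts so that a scope
// created later still sees every previously imported definition.
class AudioWorkletModel {
  constructor() {
    this.scripts = [];
    this.scopes = [];
  }
  import(script) {
    this.scripts.push(script);
    for (const scope of this.scopes) script(scope); // apply to existing scopes
  }
  createScope() {
    const scope = { definitions: new Map() };
    for (const script of this.scripts) script(scope); // replay import history
    this.scopes.push(scope);
    return scope;
  }
}

const worklet = new AudioWorkletModel();
worklet.import((scope) => scope.definitions.set('gain', class {}));

// A scope created after the import still gets the definition,
// without the developer calling import() again:
const lateScope = worklet.createScope();
console.log(lateScope.definitions.has('gain')); // true
```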
> Yes, that is what I think.
It seems redundant, but I am okay with this. It would be great to have other WG members' opinions as well.
The number of potential `AudioWorkletGlobalScope` instances that can be running at the same time doesn't seem to be spelled out anywhere. The example code at the front of the section makes me think that it could be any of: ...although I assume it's the first?
This issue was raised by @slightlyoff in #869.