WebAudio / web-audio-api

The Web Audio API v1.0, developed by the W3C Audio WG
https://webaudio.github.io/web-audio-api/

Audio Workers #113

Closed. olivierthereaux closed this issue 7 years ago.

olivierthereaux commented 10 years ago

Originally reported on W3C Bugzilla ISSUE-17415, Tue, 05 Jun 2012 12:43:20 GMT. Reported by Michael[tm] Smith.

Audio-ISSUE-107 (JSWorkers): JavaScriptAudioNode processing in workers [Web Audio API]

http://www.w3.org/2011/audio/track/issues/107

Raised by: Marcus Geelnard On product: Web Audio API

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#JavaScriptAudioNode

It has been discussed before (see [1] and [2], for instance), but I could not find an issue for it, so here goes:

The JavaScriptAudioNode should do its processing in a separate context (e.g. a worker) rather than in the main thread/context. It could potentially mean very low overhead for JavaScript-based audio processing, and seems to be a fundamental requirement for making the JavaScriptAudioNode really useful.

[1] http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0225.html [2] http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0245.html

sebpiq commented 9 years ago

Thanks @joeberkovitz!

Ok ... I understand if we are talking about a simple graph, but what about a graph with 200+ nodes (a fair number of them being custom AudioWorker nodes) versus the same DSP graph implemented in a single, heavily optimized AudioWorker node (possibly using asm.js)?

joeberkovitz commented 9 years ago

The handoff to/from an AudioWorker is not a thread handoff, it should be very cheap as it just passes buffers to and from the worker. In Chrome at least, I believe these buffers are recycled for efficiency. And of course you could use asm.js and the like even in single-purpose workers, if such optimizations are available.

So my belief is that it would still be a win to use separate nodes — I did assume you were talking about quite a substantial graph. I would think that you would have to have a truly enormous graph composed entirely of nodes that are unavailable in a native version, before you could get a win from that approach. But of course, you should test this out yourself once an AudioWorker implementation is available.

Chris Wilson will no doubt also have an opinion, and of course there are other implementations besides Chrome so perhaps Paul will chime in here.


padenot commented 9 years ago

On Mon, Oct 20, 2014, at 04:19 PM, Joe Berkovitz wrote:

The handoff to/from an AudioWorker is not a thread handoff, it should be very cheap as it just passes buffers to and from the worker. In Chrome at least, I believe these buffers are recycled for efficiency. And of course you could use asm.js and the like even in single-purpose workers, if such optimizations are available.

I think we don't recycle, because allocations are not really a bottleneck for us. If they become one, it'll get optimized, of course.

Even passing buffers across to a worker is cheap, because we implement zero-copy transfers.
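
(For illustration, a minimal sketch of the kind of zero-copy handoff being described, using the standard transfer-list argument of postMessage; the worker variable here stands in for any Worker-like endpoint:)

    var samples = new Float32Array(4096);
    // ...fill samples with audio data...

    // Listing samples.buffer in the transfer list moves ownership to the
    // worker instead of copying it; the view is neutered on this side.
    worker.postMessage({ samples: samples }, [samples.buffer]);
    console.log(samples.length); // 0 -- detached after the transfer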

So my belief is that it would still be a win to use separate nodes — I did assume you were talking about quite a substantial graph. I would think that you would have to have a truly enormous graph composed entirely of nodes that are unavailable in a native version, before you could get a win from that approach. But of course, you should test this out yourself once an AudioWorker implementation is available.

I think it really depends on the use case. Surely something like convolution is best done using a native node, but sometimes you need the flexibility of custom code. As always, measurement is a must, as we can't really predict performance. I've personally seen crazy things with asm.js, in particular.

cwilso commented 9 years ago

The handoff between native nodes and audio workers should be relatively inexpensive; as Joe says, it's just passing buffers, and NOT across a thread boundary.

Paul said:

I think we don't recycle, because allocations are not really a bottleneck for us. If they become one, it'll get optimized, of course.

It's not a question of allocations being a bottleneck - it's a question of not triggering the GC, for more predictable performance in the audio thread.

So yes, still worthwhile to leverage the native nodes. However, if you're doing custom processing and can optimize out a few steps, you might want to do so - but I'd encourage you to test to see, because native is native, and even optimized JS (and asm.js, etc) will run slower than optimized native code.

If you're building your own routing system inside an Audio Worker, though, you're probably doing it wrong.

Chris Wilson will no doubt also have an opinion...

I feel like I should get this as a tattoo. :)

sebpiq commented 9 years ago

If you're building your own routing system inside an Audio Worker, though, you're probably doing it wrong.

@cwilso the rationale for building my own routing system inside an AudioWorker was that I am still trying to develop WebPd, and Pure Data has an architecture that is surprisingly hard to fit into the Web Audio API's, especially things like messages and event scheduling. Considering that, and considering that ScriptProcessorNode wasn't really a valid option, I opted for the DIY approach.

Now with AudioWorker things are different, as there are no longer performance and latency problems. But still, some aspects of this are so hard to deal with that I'd prefer the DIY approach if it didn't mean lower performance...

cwilso commented 9 years ago

...I'd prefer the DIY approach if it didn't mean lower performance...

Well, that IS the motivation behind the Audio Worker. :)

joeberkovitz commented 9 years ago

Sebastien,

It would be great if you could email your thoughts about what is hard to deal with in the API (other than its lack of architectural compatibility with Pd). Please address your comments on the spec to the public-audio@w3.org list rather than this bug thread on github, so that the WG as a whole can read it.


sebpiq commented 9 years ago

@joeberkovitz I've already complained sooo much :D I'm probably already looking like a really annoying person ... I am rather nice in real life.

But yeah, I'd be happy to send a mail, and sorry for the noise on those tickets.

notthetup commented 9 years ago

@sebpiq Not annoying at all. I am really enjoying this discussion. I feel this is exactly the kind of use case we should be discussing at this point in time. Looking forward to WebPd.

joeberkovitz commented 9 years ago

Please see #85 for a discussion of playbackTime/currentTime relationship that is relevant to this issue.

ghost commented 9 years ago

Hi guys,

This is a small concern I have regarding the addParameter and removeParameter methods.

The following has been pulled from one of Chris's G+ posts:

I'm not sure if the addParameter and removeParameter methods are pointing in the right direction. I believe it would make more sense if the audio worker script created the parameters it needed, and those parameters were exposed (read-only) via a dynamic property on the worker node, e.g. worker.params.frequencyReduction

Currently whenever an audio worker node is created the user's script (not the audio worker script) has to create the required parameters, and create them correctly, as seen in the bitcruncher example. I'm sure you can envisage the various problems that this approach could cause :)

joeberkovitz commented 9 years ago

@si-robertson I have a couple of thoughts on this.

First, the current proposal in #113 does indeed call for parameters to be exposed on a dynamic property named "parameters" belonging to the node. So that's not an issue.

Second, the set of params is not really dynamic for any native node: they are known up front, statically. So to achieve functionality on a par with native nodes, params for an AudioWorkerNode can be defined in the audio worker script, or in the main thread script that constructs the node. The node belongs to the main thread in a sense, so it's not an a priori broken idea to define its visible parameters in the main thread rather than inside the Worker.

Finally, in order to build a kind of "shrink-wrapped" AudioWorkerNode that exhibits a native-style API, it's generally going to be useful to have a library with constructor-like functions that build specific kinds of customized nodes for you and attach functions, attributes, etc. to the AudioWorkerNodes to implement their functionality, e.g. by sending and receiving messages. This is a logical place to put the addition of custom parameters, as part of the node's setup.
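
(A minimal sketch of what such a constructor-like function might look like, assuming the createAudioWorker/addParameter shape of the current proposal; the file name, parameter names and the setFrequencyReduction helper are purely illustrative:)

    function createBitcrusherNode(context) {
        var node = context.createAudioWorker("bitcrusher.js", 1, 1);
        node.addParameter("bits", 8); // exposed as node.parameters.bits
        // attach a convenience method that talks to the worker via messages
        node.setFrequencyReduction = function (value) {
            node.postMessage({ frequencyReduction: value });
        };
        return node;
    }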

joeberkovitz commented 9 years ago

@cwilso what is a use case for removeParameter()?

ghost commented 9 years ago

Thanks @joeberkovitz

So to achieve functionality on a par with native nodes, params for an AudioWorkerNode can be defined in the audio worker script, or in the main thread script that constructs the node.

That solves the problem, but could you provide a quick example demonstrating how the audio worker script defines parameters for itself, if you have the time? I can't see any parameter-related functions in the AudioWorkerGlobalScope interface.

cwilso commented 9 years ago

@si-robertson @joeberkovitz A few thoughts:

  1. This may well be the right way to go. The thought did cross my mind as I wrote out the samples. I'll try it out. We'll need an initialization event then.
  2. removeParameter is probably unnecessary. I think I added it for symmetry, but I don't think it's necessary.

joeberkovitz commented 9 years ago

Sounds good. When I said "can be defined in the audio worker script, or in the main thread script", I just meant that the AudioWorker specification could choose to go in either direction. Currently Chris has chosen to put this functionality in the main thread script only, but it sounds like he's reconsidering.

@cwilso Moving parameter definition into the worker script might also have downstream implications for the way we handle other kinds of AudioWorkerNode configuration, e.g. the number-of-channels problem (which perhaps is a bit more dynamic than parameter setup). I'm not saying the worker script is a bad place to put this per se. Just that it would be nice to have as much consistency as possible in the way that worker nodes get set up.

One advantage of having the worker script set this up is that it does a bit more "shrink wrapping" of the node by encapsulating more of its definition in the worker script. That still leaves the main thread with the need for message-passing wrappers, etc., but perhaps those will be less common.

cwilso commented 9 years ago

Note that #378 explicitly covers the "need to rethink inputs, outputs and channels".

ghost commented 9 years ago

Just throwing a specification tweak idea your way while I'm here. Feel free to ignore it :)

interface AudioWorkerNode : AudioNode {
    void terminate ();
    void postMessage (any message, optional sequence<Transferable> transfer);

    attribute EventHandler onmessage;
    readonly attribute AudioWorkerParameters parameters;
};

The parameters attribute exposes any parameters defined by the audio worker script.

The addParameter and removeParameter functions have been removed.

interface AudioWorkerParameters {
    ???
};

I wasn't sure how to spec this correctly; it's essentially a dynamic object that exposes the parameters defined by the audio worker script (as mentioned above). The AudioWorkerParameters attributes/properties would be read-only.

interface AudioWorkerGlobalScope : DedicatedWorkerGlobalScope {
    AudioParam defineParameter (DOMString name, optional float defaultValue);

    attribute EventHandler onconstruct;
    attribute EventHandler onaudioprocess;
    readonly attribute float sampleRate;
};

The onconstruct handler is called when the audio worker script has been loaded. It allows the audio worker to define its parameters. A standard Event object (type="construct") is passed to the handler.

The defineParameter function pretty much does what addParameter does/did. This function can only be called during a construct event; at any other time it will cause an error to be thrown.

Here's a quick example of use.

    worker = audioContext.createAudioWorker("squisher.js", 1, 1)
    worker.parameters.squishLevel.value = 0.5

    // squisher.js

    var squishLevel

    onconstruct = function(e) {
        squishLevel = defineParameter("squishLevel", 0.2)
    }

    onaudioprocess = function(e) {
        ....
    }

It might also be worth considering returning a Promise from the createAudioWorker function if the audio worker script has to be loaded asynchronously.

joeberkovitz commented 9 years ago

@cwilso Here's something significant that seems to have fallen off the radar (it did come up on the list), and which also has an impact on this question of where parameters get defined.

So: AudioContext.createAudioWorker() takes a scriptURL, of course. At what point is this script considered to be loaded, relative to the point at which createAudioWorker() returns to its invoker?

If it loads synchronously, then the node can be configured and deployed in a graph immediately. For example, node.addParameter('foo') can then be called immediately after the node is returned, followed by references to node.parameters.foo. Awesome. Except that the main thread might block for a good bit while the script loads. Rather less awesome.

If the script loads asynchronously, then the node is useless until some point in the future when the script loads, but at least the main thread doesn't just squat there. If we don't modify the API further, though, the main thread can't do anything dependent on the script's having executed, until such time as the node posts some message back indicating that it started up. Furthermore, if parameters get added inside the worker script a la your recent suggestion above, attempts to initialize these parameters will presumably fail until that script has loaded.

Overall I think this question of script loading synchronicity is a bigger issue than the way parameters get added. addParameter seems to highlight an interesting facet of the synchronicity question, though.

joeberkovitz commented 9 years ago

Wow, Si's message crossed with mine, but we seem to be worrying about some of the same things.

Si's onconstruct is kind of what I presume Chris was talking about when he said "oninitialize".

cwilso commented 9 years ago

To be clear, it's not acceptable to block the main thread while initialization happens in the audio thread. So if we go the everything-configured-inside-the-worker-thread route, createAudioWorker will return a Promise that gets resolved once the Params are constructed. Unfortunately, this means it's not possible to construct PRECISELY the native node behaviors with Audio Workers (due to the native nodes having synchronous construction semantics).

This is, by the way, precisely why I did the Param configuration in the main thread - knowing, of course, that this meant shipping a custom node would require shipping two .js files (one with the param setup constructor, and one with the actual worker implementation) rather than one (worker impl with node setup in initialization handler).

ghost commented 9 years ago

Given the choice, I would personally head down the safer and more convenient everything-in-worker route even if that meant losing some functionality. Being able to process audio in a worker is the big selling point of this API, and I'm sure that will be enough for most of the devs out there.

jussi-kalliokoski commented 9 years ago

would require shipping two .js files

Not really. You probably want a build step that crams the worker source code into a Blob URL for library consumer convenience as well as faster spawn time.

Honestly I think it's a pretty horrible idea to make creating the worker return a promise or not be ready to use right away. Completely breaks the use case of real-time audio response.

ghost commented 9 years ago

Honestly I think it's a pretty horrible idea to make creating the worker return a promise or not be ready to use right away. Completely breaks the use case of real-time audio response.

The modern web is pushing hard towards asynchronous APIs in the main thread for a good reason, and the synchronous loading of audio worker scripts is likely to block the main thread. Using a promise won't break anything; it would simply be one step in an application's loading phase, in the same way that loading JavaScript files with the async attribute is, or loading critical data via XHR.

ghost commented 9 years ago

Just an FYI here: the Web MIDI API also returns a Promise when MIDI access is requested.

notthetup commented 9 years ago

If the script is loaded by XHR I can understand why it needs to be async, but sync creation of AudioNodes is something I rely on quite a bit. One of my use cases requires creating multiple AudioNodes (usually Oscillator/BufferNodes) in real time based on user interaction. For a similar application built with AudioWorkers, having to preload every AudioWorker in advance would not be feasible.

jussi-kalliokoski commented 9 years ago

synchronous loading of audio worker scripts is likely to block the main thread

I'm not suggesting synchronous loading, just that the interface is ready to control (send commands, set params) right away, like it is in the current proposal. That requires no blocking of the main thread, other than the few cycles spent in instantiation you'd have with native nodes as well. The node just receives the commands whenever it's ready, just like with normal workers.
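
(For illustration, a sketch of this behaviour, assuming the current proposal's synchronous createAudioWorker; the script name and message shape are made up:)

    // createAudioWorker returns a usable node immediately, even though
    // the worker script has not loaded yet:
    var node = context.createAudioWorker("granular.js");
    node.connect(context.destination);

    // Commands are queued and delivered once the worker is running,
    // just as with a regular Worker:
    node.postMessage({ grainSize: 256 });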

The Web MIDI API also returns a Promise when MIDI access is requested.

That is a completely different use case. MIDI access is requested once, when you start the application; after that, everything (excluding opening ports, which may or may not be instant) is retained-mode real time. AudioNodes will often be created in response to user actions, with an expectation of immediate feedback.

ghost commented 9 years ago

AudioNodes will often be created in response to user actions, with an expectation of immediate feedback.

That's something I overlooked. However, loading audio workers with the use of promises while an application is running wouldn't be impossible to do. I'm doing something similar at the moment with a prototype Reason-like device rack: the devices are loaded on demand, and loading an audio worker wouldn't really be any different.

If the current audio worker spec remains the same, the problems surrounding the definition of audio worker parameters remain the same, and I really think that side of things needs to change.

joeberkovitz commented 9 years ago

@cwilso If we use a Promise, then it creates a problem in that there is no way to factor out a separate, up-front asset-loading phase of some audio application from the immediate, on-demand creation of AudioWorkerNodes. This is a pretty big issue, not just from the architectural layering perspective. When you need to put some node into a graph, you have a short time window in which to add it. Asynchronicity with an unknown network delay is not going to work in that situation. I suppose one's script could be somehow preloaded into a data URL, but this feels pretty awkward.

What about this approach, which separates script loading from node construction, and also permits arbitrary numbers of nodes to be created later, synchronously:

   var nodeFactory;
   function factoryLoaded() {...}

   // This Promise succeeds after the JS is both loaded and parsed
   context.createAudioWorkerFactory('bitcrusher.js', ...)
       .then(function(factory) {nodeFactory = factory; factoryLoaded();});

   // later on in the application, one can do this at will (for any number of instances):
   var bitcrusherNode = nodeFactory.createAudioWorker();

ghost commented 9 years ago

To be honest, I don't see why the following would cause a problem.

context.createAudioWorker("bitcrusher.js").then(connect)

function connect(audioWorker) {
    // Insert the audio worker between two existing, connected nodes.
    nodeA.disconnect()
    nodeA.connect(audioWorker)
    audioWorker.connect(nodeB)
}

The use of a promise doesn't cause a problem here. What would cause a problem is a lack of audio graph management within an application, but that's beyond the scope of the Web Audio API.

rtoy commented 9 years ago

I don't think webaudio has ever guaranteed that creation of nodes works almost instantly. For example, in chrome, the HRTF panner takes a significant amount of time to load the database and silence is output until the database is loaded. (Because loading happens in a different thread so as not to block the main thread during loading.)

The same might be true for oscillators, which take time to create the necessary band-limited tables. However, I think this is done synchronously, so the main thread is blocked until the tables are created.

jussi-kalliokoski commented 9 years ago

However, loading audio workers with the use of promises while an application is running wouldn't be impossible to do.

Of course it's not, it's just highly inconvenient and unnecessary.

I suppose one's script could be somehow preloaded into a data URL, but this feels pretty awkward.

It's not too awkward compared to the benefit (e.g. removing network latency unpredictability and removing the need for the consumer to care about where the worker script is located):

var source = "(" + function () {
    this.onaudioprocess = function (event) {
        // something something
    };
} + ".call(this));"

var blob = new Blob([source], { type: "application/javascript" });
var url = URL.createObjectURL(blob);

var worker = context.createAudioWorker(url);

What about this approach, which separates script loading from node construction, and also permits arbitrary numbers of nodes to be created later, synchronously:

This sounds a bit like what I proposed some time ago: https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-42127725 :)

jussi-kalliokoski commented 9 years ago

This sounds a bit like what I proposed some time ago: #113 (comment) :)

Except of course that I went a bit further by proposing that the whole script is executed in the preload phase. :P

cwilso commented 9 years ago

There are two things that happen as the node is created (keep in mind the node is created in the main thread):

1) AudioParams on the Node object become available in the main thread - e.g. if you were reimplementing GainNode as an AudioWorker, you'd want to do something like:

    var gain = createMyGainNode();  // under the hood, does context.createAudioWorker( "gainnode.js" );
    gain.gain.value = 0.5;

2) The node itself actually starts processing audio in the audio thread. This CANNOT happen synchronously - the audio thread needs to load the script, get it running, and even communicate back to the main thread.

There was a suggestion in the thread that we should move AudioParam creation into the audio worker's script (that is, the AudioParams would be created from inside the worker script instead of from the main thread), because then you can have a clean encapsulation inside that script; however, the problem is that it breaks the first scenario. If the AudioParams are created from inside the worker script, you cannot avoid making WorkerNode creation from the main thread asynchronous (since the worker has to download and fire up the script, and then get an async cross-thread message back; it's not acceptable to block the main thread while initialization happens in the audio thread). At the very least, the script above would need to be:

    var gain = null;
    createMyGainNode().then( function (gainNode) {
        gain = gainNode;
        gain.gain.value = 0.5;
    } );

Bleah. And, worse yet, if you follow this mechanism it means you can't replicate the behavior of native node types - because they instantly have their AudioParams available.

Conclusion: keeping in mind that your custom code can encapsulate the worker scripts, I think it's better if we just presume that "component" scripts will be main-thread scripts that package up the worker script (as a blob, e.g.), and then encapsulate a factory method or object constructor.

E.g.:
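
(A sketch of what such a component script might look like; the inlined worker body and the names are illustrative, with only createAudioWorker and addParameter taken from the proposal:)

    // gain-component.js: a main-thread script that packages the worker
    // source as a blob and exposes a synchronous factory.
    (function (global) {
        var workerSource =
            "this.onaudioprocess = function (e) { /* apply gain here */ };";
        var url = URL.createObjectURL(
            new Blob([workerSource], { type: "application/javascript" }));

        global.createMyGainNode = function (context) {
            var node = context.createAudioWorker(url);
            node.addParameter("gain", 1.0); // AudioParam available immediately
            return node;
        };
    }(this));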

There's a separate issue about whether the main thread should get notification (an event, maybe) when the worker starts actually processing audio or not. However, the worker script _could_ post a message back on its first audioprocess event if it wanted to provide this notification, so my inclination is not to add a blanket "init event" fired back at the worker node in the main thread script.

sheerun commented 9 years ago

It would be great if AudioBuffer implemented the Transferable interface.

As far as I understand, without this interface it is copied rather than transferred between workers.

joeberkovitz commented 9 years ago

@sheerun From a data structure point of view AudioBuffer is basically a thin wrapper around a set of typed arrays, whose underlying ArrayBuffers already implement Transferable. So making AudioBuffer transferable may not create a ton of additional benefit, although it feels convenient.

Moreover transferring an AudioBuffer's ownership out of a thread might have side effects for any AudioBufferSourceNode using that same buffer, which needs to be thought through.

joeberkovitz commented 9 years ago

I'm not especially arguing for AudioBuffers to be Transferable (especially not in v1), but to be clear: I think there is a reasonable use case for transferring audio buffers to an AudioWorker, so that they can be employed by a node's sample processing code within the audio thread.

Any AudioWorkerNode that performs some custom function based on some audio data (say a custom convolver or a specialized buffer-playback node) would need to get hold of that data in the audio thread via a postMessage() from somewhere else, most likely from the main thread since that is where nodes get configured. An AudioBuffer is certainly a good way to package such data -- this is why AudioBuffer is part of ConvolverNode's and AudioBufferSourceNode's API.

The only data transfer problem that AudioWorkers "solve" is the problem that was previously created by having ScriptProcessorNodes run in the main thread. With ScriptProcessors, configuring a node's state requires no data transfer, but running samples through it does (which is a bad idea, since processing samples requires very low latency). With AudioWorkers, running samples through a node requires no data transfer, but configuring it does (and configuration can handle more latency than live processing).

sheerun commented 9 years ago

I could live with AudioBuffer not being transferable, because I can copy its buffers to a worker (sometimes I can't transfer them, because I get "An ArrayBuffer is neutered and could not be cloned." after a second attempt), but it's simply not possible to even instantiate an AudioBuffer in a Web Worker for audio processing. Any tips?

By the way, it's a bummer that developers can't define custom transferable types. I hope there's a good reason.

joeberkovitz commented 9 years ago

@sheerun There is no need to instantiate an AudioBuffer in a Web Worker, given that there is no access to AudioBufferSourceNodes or ConvolverNodes in workers. You can simply work directly with the arrays of data (and the small number of descriptive properties) that the AudioBuffer would maintain.
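
(For example, a worker-side sketch along these lines; the field names simply match whatever the main thread chooses to post:)

    // Inside the Web Worker: no AudioBuffer needed, just the channel data
    // and the descriptive properties that were sent along with it.
    onmessage = function (e) {
        var sampleRate = e.data.sampleRate; // plain scalar field
        var channels = e.data.channels;     // array of Float32Arrays
        for (var c = 0; c < channels.length; c++) {
            var data = channels[c];
            for (var i = 0; i < data.length; i++) {
                // process data[i] in place...
            }
        }
    };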

ghost commented 9 years ago

Two quick questions for you guys ...

  1. Does anyone know where I can follow Chrome/Chromium's implementation progress of Audio Workers? I have searched crbug.com but failed to find anything relevant.
  2. Has a decision been reached regarding transferable AudioBuffers? IMHO, this feature is something that we really need to see implemented. If, for example, you created an Audio Worker that was designed to be a simple audio mixer/player for a game (thereby avoiding the need to create and connect thousands of AudioBufferSourceNodes), then that Audio Worker will need access to the loaded/decoded AudioBuffers, and those obviously need to be transferred in from the main thread.

Thanks.

joeberkovitz commented 9 years ago

@creative-monkey The AudioWorker proposal is still being finalized. There are outstanding issues with it, particularly with respect to multiple inputs/outputs, asynchronous aspects of node creation and Worker/Node cardinality. These have been minuted along the way. @cwilso is going to provide an updated proposal to the group as soon as it's ready. I'll let Chris comment on implementation progress but since he has said that the proposal is incomplete I would be surprised if it's being built prior to the acceptance of a completed spec.

I will make sure that the transferable AudioBuffer question is discussed when the proposal comes back to the table. However, as we've noted in this issue thread, an AudioBuffer is a thin wrapper around a bunch of arrays and scalars that are themselves already transferable. Transferable AudioBuffers seem like something that makes sense, but if the WG doesn't wind up adopting this, your use case is still easily supported by transferring the contents of an AudioBuffer wrapped in a plain old JavaScript Object. A custom mixer/player AudioWorker isn't going to be able to create any nodes that would require use of a typed AudioBuffer anyway. It just needs the data.
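
(A sketch of that workaround; copying each channel before transferring keeps the AudioBuffer itself usable on the main thread, and the function name is illustrative:)

    function postBufferContents(worker, audioBuffer) {
        var channels = [];
        var transfers = [];
        for (var i = 0; i < audioBuffer.numberOfChannels; i++) {
            // copy, so the AudioBuffer's own storage is not detached
            var copy = new Float32Array(audioBuffer.getChannelData(i));
            channels.push(copy);
            transfers.push(copy.buffer);
        }
        worker.postMessage({
            sampleRate: audioBuffer.sampleRate,
            length: audioBuffer.length,
            channels: channels
        }, transfers);
    }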

sheerun commented 9 years ago

@joeberkovitz Sure, I can manually transfer AudioBuffer contents, but is it possible to create a new AudioBuffer in a Worker? I couldn't manage to do so. Help?

joeberkovitz commented 9 years ago

@sheerun At the moment, regular WebWorkers do not have access to AudioContext or any of the Web Audio API interfaces. There is a separate issue, #16, proposing that WebWorkers be allowed to create their own AudioContexts (with each context private to the Worker, since no one's proposing that audio graphs be shared across multiple Workers). If this proposal is adopted, then AudioBuffers could indeed be instantiated in a WebWorker.

This proposal does not apply to AudioWorkers, which run in a dedicated audio thread that is already associated with some AudioContext. AudioWorkers do pure signal processing on samples and can't access nodes in the graph. Thus they don't absolutely need to work with AudioBuffers (although it might still be useful to have them do so).

joeberkovitz commented 9 years ago

Actually, #16 was adopted by the WG in our last review session -- my mistake. So it should be possible in a forthcoming implementation to decode and otherwise create AudioBuffers in Web Workers.

ghost commented 9 years ago

Thanks for the reply, @joeberkovitz, it's appreciated. The transferable AudioBuffer thing isn't a big issue really, as you said the sample data is transferable already, it's just one of those "would be handy to have" features.

andremichelle commented 9 years ago

Are there any updates on a release date?

ghost commented 9 years ago

I submitted a Chromium issue for the implementation of Audio Workers in Chrome because I'm seeing no movement at all regarding this.

padenot commented 9 years ago

We are in the late stages of specifying this node, but some more work is needed. Since the specification is not done, it's perfectly normal that the Chrome folks have not released an implementation. The same goes for Firefox.

The current status and the reasons why this has been somewhat delayed are outlined in this email: https://lists.w3.org/Archives/Public/public-audio/2015JanMar/0123.html.

ghost commented 9 years ago

Fair enough, thanks @padenot. I will go back into hibernation for another few weeks :)

Would it be okay to get one or two progress updates posted here in the meantime? I have a keen interest in this API and a head full of ideas that should put Web Workers to good use.

padenot commented 9 years ago

It is likely that there will be progress this Wednesday.