sebpiq / WAAClock

A comprehensive event scheduling tool for Web Audio API.
MIT License
244 stars 32 forks

ScriptProcessorNode is deprecated #20

Open 1valdis opened 5 years ago

1valdis commented 5 years ago

As of now, ScriptProcessorNode is deprecated, and it is recommended to look into other ways of handling scheduling, such as AudioWorklets. As I see in the source code, ScriptProcessorNode is the default way to schedule events in this library. Will there be any effort to replace it with some other mechanism? Should the community look into implementing this?

sebpiq-kialo commented 5 years ago

Hi @1valdis ! I believe it shouldn't be too hard to implement an AudioWorklet version, I'm happy to look at any PR ;)

Also, for greater compatibility, we would probably need to support both ScriptProcessorNode and AudioWorklet and select depending on availability in the current browser.
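
For illustration, the selection could be as simple as a feature check (just a sketch, the helper name is made up and not part of WAAClock):

// Hypothetical helper: pick the AudioWorklet path when the browser exposes it,
// otherwise fall back to ScriptProcessorNode.
function supportsAudioWorklet (context) {
  return typeof AudioWorkletNode !== 'undefined' && !!context.audioWorklet
}

// var clockMode = supportsAudioWorklet(context) ? 'audioWorklet' : 'scriptProcessor'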

1valdis commented 5 years ago

Hi @sebpiq-kialo :)

At the moment I'm learning AudioWorklets and experimenting with them, and along the way documenting them for MDN. I'll still need some scheduling utility for my Web Audio project, and this library has a convenient API. Another pro is that the author apparently knows about the almost legendary Tale of Two Clocks :) So I'll probably look into it, but I'm not making any promises now.

If somebody else passes by and wants to implement the idea, please mention it in this issue so we don't do the same work twice ;)

1valdis commented 5 years ago

I'm experimenting with the AudioWorkletNode.port property to communicate between the AudioWorkletProcessor and the node. Here are a number of considerations so far:

  1. There appears to be a delay of 2-5 ms between posting a message from the AudioWorkletProcessor's port and the callback running on the AudioWorkletNode. I measure this by posting currentTime from the processor and comparing it to the audio context's currentTime property in an event listener on the node's port. I haven't tested under heavy workloads or anything like that. Is this delay acceptable?
  2. We can schedule events by posting the needed time to the AudioWorkletProcessor and having it post back when that time is reached (see the sketch after this list). As we can't pass functions through MessagePort, we have to think about some other mechanism to keep the list of events in the node and in the processor in sync.
  3. The tolerance is not going to be needed. AudioWorkletProcessor.process gets called every render quantum (128 samples) anyway, so we physically can't miss the needed time. Still, we have to get as close to it as possible. The simplest option is checking whether currentTime is larger than the scheduled time. Assuming a sampleRate of 44100, the process function will run ~345 times per second, giving a maximum additional delay of ~2.9 ms. If this is not good enough, we can cut that in half by checking whether the current quantum or the next one is closer to the needed time (which can be calculated from the sampleRate and currentTime properties in AudioWorkletGlobalScope).
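
Roughly, points 2 and 3 could look something like this (an untested sketch, names are placeholders):

// Untested sketch of the processor side: the node posts { id, time } pairs, the
// processor checks them against currentTime once per render quantum and posts the
// ids back once their time has been reached. currentTime is a global in
// AudioWorkletGlobalScope.
registerProcessor('scheduler', class SchedulerProcessor extends AudioWorkletProcessor {
  constructor () {
    super()
    this.events = [] // [{ id, time }]
    this.port.onmessage = (e) => { this.events.push(e.data) }
  }
  process () {
    var due = this.events.filter((ev) => ev.time <= currentTime)
    if (due.length) {
      this.events = this.events.filter((ev) => ev.time > currentTime)
      due.forEach((ev) => this.port.postMessage(ev.id))
    }
    return true // keep the processor alive
  }
})

// On the node side the callbacks stay on the main thread, keyed by id, e.g.:
// node.port.postMessage({ id: 1, time: context.currentTime + 0.5 })
// node.port.onmessage = (e) => callbacks[e.data]()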

The resulting precision is... quite disappointing. It can't be used for high-precision scheduling. It may cause some audible jitter if used for something like a metronome; for scheduling audio playback, direct use of setTimeout and AudioNode.start will be superior, IMO. Unless we figure something out.

domchristie commented 2 years ago

I'm wondering if the ScriptProcessorNode code could be replaced with something like:

// create/register AudioWorkletProcessor
var processor = `registerProcessor('tick', class Tick extends AudioWorkletProcessor {
  process () {
    this.port.postMessage('tick')
    return true
  }
})`
var blob = new Blob([processor], { type: 'application/javascript' })
var url = URL.createObjectURL(blob)

self.context.audioWorklet.addModule(url).then(function () {
  self._clockNode = new AudioWorkletNode(self.context, 'tick')
  self._clockNode.port.onmessage = function () { setTimeout(function () { self._tick() }, 0) }
})

(aside: still need to add a way to stop/start)
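
One way this could work (again, just a sketch): let the node toggle the processor through the same port:

// Sketch only: the processor keeps ticking while `running` is true, and the node
// flips that flag by posting 'start' / 'stop' messages.
var processor = `registerProcessor('tick', class Tick extends AudioWorkletProcessor {
  constructor () {
    super()
    this.running = true
    this.port.onmessage = (e) => { this.running = (e.data === 'start') }
  }
  process () {
    if (this.running) this.port.postMessage('tick')
    return true
  }
})`

// on the main thread:
// self._clockNode.port.postMessage('stop')
// self._clockNode.port.postMessage('start')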

domchristie commented 2 years ago

Also, I'm not really sure why setTimeout is needed here, and wondered if it could be removed, or replaced with worker-timers to prevent it from freezing in unfocussed windows?
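
e.g. something along these lines, assuming worker-timers provides a drop-in setTimeout (not tested):

// Sketch: worker-timers runs its timers in a Web Worker, so they should keep
// firing while the window is unfocussed (assuming the package's drop-in setTimeout).
import { setTimeout } from 'worker-timers'

self._clockNode.port.onmessage = function () {
  setTimeout(function () { self._tick() }, 0)
}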

felixroos commented 2 years ago

@domchristie I've tested your approach and others in this post. Using AudioWorklet does not seem like a good idea, as the callbacks via postMessage don't arrive as fast as every 128 samples at 48 kHz. Also, setInterval without a worker seems to happily run in the background. So if I haven't missed anything, the good old Tale of Two Clocks approach still seems to be the best option for audio scheduling from JS, at least until AudioContext is supported in Workers.
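
For anyone who wants to check this themselves, a rough sketch of that kind of measurement (not the exact code I used; `context` is assumed to be a running AudioContext):

// Rough sketch: a trivial processor posts once per render quantum and the main
// thread logs the interval between messages. At 48 kHz one quantum is
// 128 / 48000 ≈ 2.67 ms, so anything noticeably slower means postMessage
// callbacks can't keep up with the audio clock.
var src = `registerProcessor('rate-probe', class extends AudioWorkletProcessor {
  process () { this.port.postMessage(currentTime); return true }
})`
var url = URL.createObjectURL(new Blob([src], { type: 'application/javascript' }))

context.audioWorklet.addModule(url).then(function () {
  var node = new AudioWorkletNode(context, 'rate-probe')
  var last = performance.now()
  node.port.onmessage = function () {
    var now = performance.now()
    console.log('interval (ms):', (now - last).toFixed(2))
    last = now
  }
  node.connect(context.destination)
})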

sebpiq commented 2 years ago

@domchristie @felixroos thanks for testing this! As far as I am concerned, that confirms one thing I have been convinced of since the inception of Web Audio: it is a rubbish, cluttered, unusable API (sorry for the harsh words). Now that AudioWorklet and WebAssembly are here, all the other audio nodes will pretty much go into disuse, as all audio peeps will slowly switch to custom DSP (or using good audio libraries implemented by the community).

As a consequence, I don't think I will put more time into developing / maintaining this library (I'm working on other, more exciting things, such as this one: https://github.com/sebpiq/WebPd).

However, if you feel like contributing some updates (e.g. adapting it to AudioWorklet), I'll happily merge it in for other users :wink:

felixroos commented 2 years ago

that confirms one thing I have been convinced of since the inception of Web Audio: it is a rubbish, cluttered, unusable API (sorry for the harsh words).

I would not blame the Web Audio API in this case, as the callback rate is limited by the JavaScript thread itself, not the Web Audio API. It does not matter which scheduling method you pick; the JS thread will always be the bottleneck. As long as you don't communicate with the main thread, everything runs as expected.

Now that AudioWorklet and WebAssembly are here, all the other audio nodes will pretty much go into disuse, as all audio peeps will slowly switch to custom DSP (or using good audio libraries implemented by the community).

I hope the Web Audio API will stay supported, because it's easy to use and quite powerful for not-too-complex tasks. I am actually quite happy with what can be done with it; for example, https://strudel.tidalcycles.org/?eMXw6Z36WfWh runs entirely on the Web Audio API. Using WASM involves a much more complicated toolchain + knowledge, so having the option to stay within JS is pretty neat.

I've now started my own mini lib for scheduling; it's actually quite simple to get a stable callback with the Tale of Two Clocks approach.
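
The core of it is just a lookahead loop along these lines (simplified sketch, not the actual lib code):

// Simplified sketch of the Tale of Two Clocks pattern: a coarse JS timer wakes up
// regularly and schedules, on the precise audio clock, everything that falls
// inside the lookahead window.
var lookahead = 0.1   // seconds scheduled ahead on the audio clock
var interval = 25     // ms between JS wake-ups
var queue = []        // [{ time, callback }], kept sorted by time

function tick (context) {
  var horizon = context.currentTime + lookahead
  while (queue.length && queue[0].time < horizon) {
    var event = queue.shift()
    event.callback(event.time) // e.g. osc.start(event.time)
  }
}

// setInterval(function () { tick(context) }, interval)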

edit: WebPd looks awesome

sebpiq commented 2 years ago

@felixroos yes, sorry ... I was a bit harsh haha ... but I have spent so much energy trying to hack my way around the limitations of WAA over more than 10 years that I've grown tired of it.

Problem with the tale of 2 clocks is that it's a hack. Essentially, what can (and will) happen is that some times, the deadlines for scheduling an event in time will be missed and therefore you have buggy audio code. That, plus the fact that it is (as you might have realized by developing your own scheduling lib) not so easy to get right. It's not that it is hard either, but clearly in my opinion it is uselessly complicated compared to what you can do with scheduling if you're switching all your audio code to an AudioWorklet instead. This allows for a more integrated approach, simpler, more elegant and more robust, as opposed to the patchy Web Audio scheduling with a setInterval.