I don't see why the Web Audio API needs to provide this. You scheduled these things and they will happen when you said they should happen. Why do you need to know when a ramp ends or a setValueAtTime changes the value?
Even if such an API were added, it doesn't seem like it would be any better than using a setTimeout.
What is the use case?
Let me turn that around on you: why would I schedule an independent setTimeout when the change call itself already encodes all the information necessary? As a JS API, having the events available makes for cleaner code, and also means that if the user controls the ramps (say, I'm writing a synth) I don't need parallel code all over the place. I just need to do what we do with every other JS API: listen for the start/stop events in whatever component is responsible for handling them, even if it knows nothing about what's generating those events.
Let me turn it around. Why do you need to know when an audio param automation happens? You scheduled it and it will happen exactly when you said it should happen. If you had an event, what would you do with it?
What is the actual use case?
Apologies for the verbosity; small code snippets did not allow me to convey the difference events make, compared to having to add setTimeout calls, in "real" code.
With that said, allow me to illustrate this with a common synth concept: portamento from one key to the next. I'm currently working on a drawbar organ synth using Web Audio, so let's look at portamento in that context. A single note is made up of eight oscillators set to frequencies that harmonise with each other, and in order to slide from one key to the next I can (and do, because I have no choice right now) use setTimeouts to make sure that code is triggered at (roughly) the right time. However, those timeouts make things super gross compared to hooking into timing events to trigger code when it needs to run, leading to code duplication and generally making the codebase harder to maintain and extend (both by the original author and others) than should be necessary.
So, first the code with timeouts:
import { UI } from "somewhere/ui.js";
class Key {
  constructor() {
    // set up a bunch of oscillators for this key.
    this.oscillators = ...;
    // and set up a gain node, because at the very
    // least an oscillator needs to play/stop with an
    // attack/decay envelope instead of clicking.
    this.volume = ...;
  }
  ...
  portamentoTo(nextKey) {
    this.nextKey = nextKey;
    let when = context.currentTime + 0.100;
    // effect portamento for each oscillator
    this.oscillators.forEach((osc, i) => {
      // set up the value ramp
      let endValue = nextKey.oscillators[i].frequency.value;
      osc.frequency.linearRampToValueAtTime(endValue, when);
      // Then schedule a crossfade so that we can turn off this key
      // while activating the next key after the ramp finishes.
      // Note that we need to recompute the delay each time, because
      // ramps use absolute time in seconds, but setTimeout uses
      // relative time in milliseconds. And this code runs every
      // time this function runs.
      let delay = (when - context.currentTime) * 1000;
      setTimeout(() => this.crossFade(), delay);
      // Rather than just triggering "when a ramp happens", this code
      // has to manually call the UI to do "something". For a single function
      // that's not terrible, but in the full code this won't be a single function:
      // the UI will need to be invoked separately in each function.
      let startValue = osc.frequency.value;
      UI.handleOscillatorRamp(i, startValue, endValue, context, when);
    });
  }
...
  crossFade() {
    // set up a simple cross-fade
    let when = context.currentTime + 0.050;
    this.volume.gain.linearRampToValueAtTime(0.001, when);
    this.nextKey.volume.gain.linearRampToValueAtTime(1.0, when);
    // and now we need another setTimeout if we actually
    // need to clean up once the cross-fade is done, requiring
    // a timing transform again. And this code runs every
    // time this function runs.
    let delay = (when - context.currentTime) * 1000;
    setTimeout(() => this.doWhateverCleanupCanHappenHere(), delay);
    // Also note the above code will _only_ kick in for this one function.
    // If we have another function for, say, just a fadeout because
    // we turned off portamento and we let go of a key, that function
    // needs to duplicate the timeout call, or at the very least duplicate
    // the call to a scheduling wrapper function.
  }
}
class UI {
  ...
  handleOscillatorRamp(sliderIndex, startValue, endValue, audioContext, audioWhen) {
    // start an automation visualisation - the user should not
    // be in control of the UI while this happens.
    this.disableSliderInteraction(sliderIndex);
    // compute when this transition is supposed to be done, because
    // the audio context and setTimeout use different timing.
    // And this code again runs every time this function runs.
    let delay = (audioWhen - audioContext.currentTime) * 1000;
    setTimeout(() => this.enableSliderInteraction(sliderIndex), delay);
    // let's assume we already have a function on the UI side that
    // makes sliders slide over time.
    this.startSlide(sliderIndex, startValue, endValue);
    ...
  }
}
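As an aside, the timing transform that keeps reappearing above could be pulled into a helper. A minimal sketch, with an illustrative name that is not part of any API:

// Convert an absolute AudioContext time (in seconds) into a setTimeout
// delay (in milliseconds) and schedule a callback for that moment.
function scheduleAtAudioTime(audioContext, audioWhen, callback) {
  const delayMs = Math.max(0, (audioWhen - audioContext.currentTime) * 1000);
  return setTimeout(callback, delayMs);
}

Even with that helper, though, every call site still has to remember to invoke it: the transform gets centralised, the scheduling does not.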
The timeout-based version above is not good code. We have no separation of concerns, we're transforming timing values from one timing domain to another all over the place (we could write a util library for this, as sketched above, but that doesn't stop us running that transform each time), there is a large surface for potential bugs, and the use of timeouts everywhere would make any dev go "what the hell is going on in this code, why are you not using the normal event system". So, rewriting this as code that can take advantage of events:
import { UI } from "somewhere/ui.js";
class Key {
  constructor() {
    ...
    this.setupListeners();
  }
  setupListeners() {
    // Ensure the UI will get notified about ramps. We don't know, nor
    // do we care about, what that actually means in terms of UI code.
    this.oscillators.forEach((o, i) => {
      let f = o.frequency;
      f.addEventListener("rampstart", e => UI.handleOscillatorRampStart(e, i));
      f.addEventListener("change", e => UI.handleOscillatorChange(e, i));
      f.addEventListener("rampend", e => UI.handleOscillatorRampEnd(e, i));
      // and listen to a rampend ourselves, so we can effect crossfades:
      if (i === 0) {
        f.addEventListener("rampend", e => this.crossFade());
      }
    });
    // also listen to volume changes: if it drops below "close to zero",
    // we want to clean up whatever needs to be cleaned up, and inform
    // the UI of our new volume. Again, we don't care what the UI does
    // with that information.
    this.volume.gain.addEventListener("change", e => {
      this.processGain(e);
      UI.volumeAdjusted(e, this);
    });
  }
...
  portamentoTo(nextKey) {
    this.nextKey = nextKey;
    let when = context.currentTime + 0.100;
    // Set up the same ramps as before. But that's all we have to do: there's
    // no followup code, because the event system will do the right thing.
    this.oscillators.forEach((osc, i) => {
      let endValue = nextKey.oscillators[i].frequency.value;
      osc.frequency.linearRampToValueAtTime(endValue, when);
    });
  }
...
  crossFade() {
    let when = context.currentTime + 0.050;
    this.volume.gain.linearRampToValueAtTime(0.001, when);
    this.nextKey.volume.gain.linearRampToValueAtTime(1.0, when);
    // Again, that's it. Event logic does the rest.
  }
...
  processGain(evt) {
    if (evt.value < 0.05) {
      this.doWhateverCleanupCanHappenHere();
    }
    // And this part is important too: this function will kick in
    // for _any_ change to this key's volume. No other function needs
    // any code to manually trigger this final part of handling a
    // volume change.
  }
}
// This class sees the most drastic change when we have events
class UI {
  ...
  handleOscillatorRampStart(evt, sliderIndex) {
    this.disableSliderInteraction(sliderIndex);
  }
  handleOscillatorChange(evt, sliderIndex) {
    this.showOscillatorValue(sliderIndex, evt.value);
  }
  handleOscillatorRampEnd(evt, sliderIndex) {
    this.enableSliderInteraction(sliderIndex);
  }
  volumeAdjusted(evt, key) {
    // do whatever we want to do here when a key's volume is adjusted.
    // Maybe change the mixer's coloring, maybe grey out the "for looks only"
    // on-screen keyboard, maybe we want to write some aftertouch code later,
    // but whatever it is, we can do that _here_ in the UI class, where that code
    // belongs. Not spread out over other classes.
  }
  ...
}
Having these events makes a huge difference in terms of code quality and maintainability. And remember that any tiny benefit in the above code generalises to "having multiple functions that all need scheduling code in place without events, and that need zero such code in place with events". The fact that we get a single processGain() for free is not the important part; the fact that no other function will ever need code to call it again is one of the critical improvements that event-based code brings to the table.
(And on a related but slightly tangential note: the "change" event is interesting because "how often do we fire this event? it could fire millions of times per second!". Which is true, but it could trivially be limited to "only as fast as the code that backs requestAnimationFrame allows", with the same caveat that devs are on the hook for writing event throttling similar to what they're already writing to make sure scroll handlers don't lock up the JS thread, and consequently, the entire page.)
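To illustrate, that throttling would look much like what devs already write for scroll handlers. A minimal sketch, assuming the proposed "change" event exists on an AudioParam (param), with updateSlider as a hypothetical UI hook:

let scheduled = false;
let latestValue = 0;
param.addEventListener("change", evt => {
  latestValue = evt.value;      // always remember the most recent value
  if (scheduled) return;        // but repaint at most once per frame
  scheduled = true;
  requestAnimationFrame(() => {
    scheduled = false;
    updateSlider(latestValue);  // hypothetical UI update
  });
});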
Thanks for the code; it will take me a while to look it over.
First, I'm not a musician, so I had to ask @hoch for help. He said portamento is usually implemented with just one oscillator sliding from one frequency to another. You don't need two oscillators. This also means you don't need a cross-fade, simplifying things a lot.
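In other words, something along these lines (a sketch; osc, context, and nextFrequency are placeholders):

// Slide the existing oscillator's frequency to the next key's pitch,
// rather than cross-fading between two sets of oscillators.
let now = context.currentTime;
osc.frequency.setValueAtTime(osc.frequency.value, now);
osc.frequency.linearRampToValueAtTime(nextFrequency, now + 0.1);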
For your cross fade, you don't need a setTimeout. You should schedule the fade by doing something like this:
gain.gain.setValueAtTime(1, time + 0.1);
gain.gain.linearRampToValueAtTime(0.001, time + 0.1 + 0.05);
next.gain.setValueAtTime(0, time + 0.1);
next.gain.linearRampToValueAtTime(1, time + 0.1 + 0.05);
Hence, no setTimeout needed at all.
I also don't understand why you need this for the UI. You know exactly when these things happen (because you had to schedule them), so the UI can move its sliders around based on when you scheduled things to happen. I doubt anyone will care if the sliders are a bit off. (They could be off anyway, even if the automations provided events.)
Finally, this causes a huge amount of cross-thread communication between the audio thread (where the automations are actually run) and the main thread, where the event listeners are. This is because the audio thread can't know whether there are event listeners or not. (I suppose the main thread could inform the audio thread, but since they're on different, unsynchronised threads, that opens up even more issues.)
I'm not sure I understand your last statement: there are already several events associated with AudioParams, so why would a rampstart and rampend introduce any problems there? I can certainly see how rapid-firing a rampchange event would be tricky, but the single rampstart, rampend, and regular change (for things like blah.value = X or blah.setValueAtTime, not one that fires during the ramp) shouldn't cause any problems at all. So if the rampchange event is technically problematic: I care far more about at least having single-fire events for rampstart/rampend and instantaneous changes than about having no events at all.
I just want to write nice code. And this API right now isn't user-friendly enough in that respect.
There are no events for AudioParams. Where do you see that in the spec?
https://developer.mozilla.org/en-US/docs/Web/API/AudioParam lists seven of them - if those are not actually part of AudioParam then I guess this is a true feature request rather than "can we add some more", and I understand the resistance to it better.
Though of course I'd still advocate putting the simple, non-thread-complicating ones in, with a caveat sentence saying "There is no timing guarantee on when these events get sent to listeners by the JavaScript event system, only that they are generated in response to the relevant event occurrence", so that people who were thinking they might get better timing resolution than setTimeout can offer, rather than just more convenient code that lines up better with modern JS, know what to expect?
I think you are misinterpreting what the events there are. They are actually referring to the AudioParam automation "events". These have no relationship to the JS events you're looking for.
Oh I see! In that case this is just a request for "events" rather than "more events" =)
As this is a feature request it cannot be considered for V1 as we are now in CR. We will reconsider this for v.next.
An obvious use case is fading some music out to silence and then stopping the source when the fade-out ends. "rampend" would be the perfect event for knowing when to stop the source. Otherwise you have to separately store the timestamp when the ramp is due to end, set up a requestAnimationFrame loop, and then constantly poll the value to see if it's finished yet.
You set the time for the ramp end, necessarily, so you know exactly when it ends. Why can't you do stop(rampend) to also stop the source? No need for an event or storing a timestamp for rAF.
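Concretely, that amounts to something like the following (a sketch; context, gain, source, and rampEnd are placeholders):

let rampEnd = context.currentTime + 2;
gain.gain.linearRampToValueAtTime(0.001, rampEnd);
source.stop(rampEnd); // stop the source at the exact time the ramp ends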
I must be missing some context here....
It's a convenience feature. There's onended for AudioBufferSourceNode, for example. Would you say that's not necessary either, because you can just work out when it ends by adding its duration to the current time? Sure, you can do that, but real applications often want to do something interesting when playback ends. This includes non-audio-related things, like managing a simultaneous visual transition, going to a new level in a game, displaying a notification to the user, etc. So it's a useful thing to provide and makes it easier to do things. One obvious use case for this is a game which waits for a fade-out to finish before going to the next level.
From our point of view, we also actually develop a framework that sits in between Web Audio and the actual application code. So while an actual application might do something like stop(rampend), our framework instead fires its own onrampend event (which we currently have to emulate); it's then up to the calling code what it does with that. It might stop the audio buffer, it might not, it might co-ordinate a visual transition, it might go to a new level in a game, etc.
Meanwhile, to emulate the event, we have to keep track of how many ramps are currently in progress and register and de-register rAF callbacks for the first and last ramps while regularly polling the audio time. This all increases the complexity of doing something pretty straightforward.
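For instance, something like this sketch (the class and its names are ours for illustration, not part of any API):

// Track outstanding ramps and poll the audio clock once per frame,
// firing a callback when each ramp's scheduled end time has passed.
class RampEndEmulator {
  constructor(audioContext) {
    this.context = audioContext;
    this.ramps = [];        // entries of { endTime, callback }
    this.rafId = null;
  }
  addRamp(endTime, callback) {
    this.ramps.push({ endTime, callback });
    if (this.rafId === null) this.poll();    // first ramp: start polling
  }
  poll() {
    this.rafId = requestAnimationFrame(() => {
      let now = this.context.currentTime;
      this.ramps = this.ramps.filter(ramp => {
        if (now >= ramp.endTime) {
          ramp.callback();  // emulated "rampend"
          return false;     // drop the finished ramp
        }
        return true;
      });
      this.rafId = null;
      if (this.ramps.length > 0) this.poll(); // last ramp done: stop polling
    });
  }
}
// usage: emulator.addRamp(endTime, () => source.stop());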
IMO this is a no-brainer.
"One obvious use case for this is a game which waits for a fade-out to finish before going to the next level."
I know you're the game expert here, but it doesn't make sense to me that you'd take a critical cue for the next level from the end of a sound bite. What if it does not fire an event as you expected? Then the game is stuck there forever. We have to assume that anything can happen, because of the dynamism of the Web Audio API: the scheduled param event can be cancelled, and it might never reach the end of the ramp. Or it can even be overridden by other scheduled param events.
Furthermore, I suspect this feature will encourage bad practice, stitching AudioParams and UI/visual changes together in an ad-hoc fashion. You'll be much better off having your own routing/scheduling framework.
I think it might look convenient at a glance, but nothing is easy when AudioParam is involved. I am sure we will discover multiple corner cases as we start speccing it.
So... why do we have onended, then? A ramp end event is essentially the same thing, but for a fade-out.
Another reason to fire events is that the application might be managing additional state in parallel to the playback. For example, since AudioBufferSourceNode does not provide an isPlaying flag, we set a flag at the same time as calling start(), and then unset it in onended. This is an example of why stop(endTime) is not sufficient; you need an actual JS callback at that time. The same goes for an isFading flag.
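That bookkeeping looks something like this (a sketch; isPlaying is our own flag, not part of the Web Audio API, and decodedBuffer is a previously decoded AudioBuffer):

let source = context.createBufferSource();
source.buffer = decodedBuffer;
let isPlaying = false;
source.onended = () => { isPlaying = false; };  // the callback does the unsetting
source.start();
isPlaying = true;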
I knew you were going to ask that question. :)
The stopping mechanism of AudioBufferSourceNode is "irreversible": once it is scheduled, it will happen no matter what. But as you already know, that is not the case for AudioParam. The scheduled parameter events in the queue can be changed at any time (the order, the final computed value, etc.). The difference between these two might look subtle, but it is NOT.
Also, a change to AudioParam affects all the AudioNodes. The actual impact of this change should be widely discussed and assessed.
Perhaps this is a good sign that we should consider a different playback/event model in V2? V1 has already gone to CR, so I am a bit doubtful we can include this change in V1 at this point.
It is currently impossible to trigger code based on AudioParam changes (gain, frequency, detune, etc.), requiring code that relies on setTimeout or other "parallel counting" tricks and hoping the timing works out. I'd like to request four events be added to AudioParams to make it easier for developers to write code that is triggered by parameter changes: one based on the standard concept of DOM value changes, relating to stable value changes, and three that deal with from/to changes over time, mirroring how CSS already solved event signalling in that context:

- a "change" event, for when node.param.value = ... or node.param.setValueAtTime changes the value
- a "rampstart" event, for when a node.param.somekindofRampToValueAtTime interval starts
- a "rampchange" event, fired while a node.param.somekindofRampToValueAtTime interval runs
- a "rampend" event, for when a node.param.somekindofRampToValueAtTime interval ends

As Web Audio already has its own timing built in, having to write code that uses a parallel setTimeout() just to do even something basic like "once the exponential ramp in the gain from current to 0.001 ends, stop the connected oscillator" seems by far the less preferred option compared to something like gain.addEventListener("rampend", evt => oscillator.stop()).
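As a final sketch, the request boils down to replacing the first pattern below with the second (rampend being the proposed event, not an existing API):

// today: a parallel setTimeout, manually converting seconds to milliseconds
let endTime = context.currentTime + 2;
gain.gain.exponentialRampToValueAtTime(0.001, endTime);
setTimeout(() => oscillator.stop(), (endTime - context.currentTime) * 1000);

// requested: let the param itself signal completion
gain.gain.exponentialRampToValueAtTime(0.001, endTime);
gain.gain.addEventListener("rampend", evt => oscillator.stop());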