I don't think you want binding here. You just want .connect() on AudioParam, to enable it to drive another AudioParam (or be a source, which would solve the "I need a DC offset" problem).
Actually ... is there a good reason to have this complicated behaviour in AudioParam in the first place? Why aren't parameters just "slots" to which any AudioNode can be connected, with the automation functionality currently handled by AudioParam handled instead by a first-class AudioNode, called for example RampNode? That would make the API simpler and more consistent.
In fact I needed this behavior while trying to design a Pure Data clone for the Web Audio API. Control-rate messages of Pure Data can be implemented in an AudioWorker without any problem. But when these messages need to change a DSP object parameter (e.g. line~), I have no option other than message passing through the UI, which has no real-time guarantees. Automation is not helpful, because the control-rate behavior is computed in real time and is not known beforehand (because of the possibility of data flow in the audio-to-control-rate direction). There needs to be some kind of inter-node communication for this use case.
This was the 2nd use case in my original post. I think the 3rd case is also crucial. Think about a Web Audio synth that is controlled through MIDI. If a user wants to use this software in a live performance, latency should be as low as possible, so the UI thread shouldn't be involved there either. Even if MIDI message processing were made available to an AudioWorker, the MIDI data would be stuck in that AudioWorker and could not be passed to other AudioNodes, since there is no real-time inter-node communication (especially with native ones). I don't know if I'm missing something here, but as far as I can see, in both cases native AudioNodes cannot be used effectively, because AudioNode parameters cannot be changed in real time.
if by "pure data clone" you mean full-blown JavaScript clone of Pd, you should check this out : https://github.com/sebpiq/WebPd it's far from perfect, but there is already a lot of work done.
Yes, I know that one, but I preferred Emscripten for performance :) so the codebase is actually C++. Besides, WebPd couldn't use native AudioNodes; its author had some complaints in the past (http://lists.w3.org/Archives/Public/public-audio/2013OctDec/0073.html). Thanks for sharing anyway.
Hahaha ... yeah I know ;)
If you wanna use Emscripten, you'll have everything running in one AudioWorker, right? So 1) and 2) shouldn't be a problem.
:). Yes, all the control-rate computation will be in one node. But for the DSP part I want to exploit native nodes as much as I can, so for the DSP graph I have a one-to-one mapping to the Web Audio graph in mind. I want to use native AudioNodes where available, but this results in the problem I stated: I need a mechanism to pass the values calculated in the control-rate node within the same batch. So I still think they are a problem.
Well ... this is exactly what I had in mind for WebPd, but it is very hard to achieve, for the reasons listed in the discussion you linked above (which AudioWorker partially addresses) ...
Also, there are very few native nodes that can be used to reimplement Pd objects. Even the AudioBufferSourceNode cannot be used to implement a simple tabread~ ... and let's not talk about the mess that event scheduling would be.
Trust me, it is really not worth it. You will write a lot of ugly code to glue it all together, trying to reuse just a very small subset of the native functionality from the Web Audio API (I've been there :), and maybe even create a performance overhead if you have a lot of separate AudioWorkerNodes, as opposed to all the DSP running in a single AudioWorkerNode.
This said, if you really want to try, I'd be very curious to see what you come up with. But I am pretty sure that the native Web Audio API functionality won't be very useful for you...
@cwilso I think it should be a two-way update mechanism, so that the code below is valid.
onaudioprocess = function (e) {
    // Read back the value from the previous batch and halve it.
    e.parameters.envelopeGain.value = e.parameters.envelopeGain.value * 0.5;
};
This is because there are cases (e.g. the snapshot~ object in Pd) where a data value from the previous batch is required.
@sebpiq thank you for the advice. I'm in the design phase right now, so there are certainly things I'm missing. I'll keep you informed, and maybe ask you for further advice in the future if that's ok? :)
@ugur-zongur sure, I can help. And actually I have been desperately searching for people to help on WebPd. So if you feel like giving a hand, that would be awesome. I am quite open about how we do it since I haven't found a satisfying way until now. If you get good results with your experiments, I'd be happy to take it into WebPd. Good luck!
If you need to connect an AudioParam to an AudioNode then check out:
They use the WaveShaperNode or AudioBufferSourceNode to provide a DC-offset into a GainNode - and then allow its "gain" property to be connected to other nodes. This should allow you to use MIDI notes as inputs to your PD-like audio graph.
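For reference, the WaveShaperNode variant of that trick looks roughly like this with today's API (a minimal sketch; someOtherNode is just an illustrative target):

// A WaveShaperNode whose curve maps every input sample to 1
// turns any running source into a constant-1 signal.
var shaper = context.createWaveShaper();
shaper.curve = new Float32Array([1, 1]); // f(x) = 1 for all x in [-1, 1]

var driver = context.createOscillator(); // any running source will do
driver.connect(shaper);
driver.start();

var dc = context.createGain(); // its output equals dc.gain's value
shaper.connect(dc);
dc.gain.value = 0.25;          // the DC offset we want
dc.connect(someOtherNode.frequency); // illustrative target param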
@sebpiq Yeah, we could get rid of AudioParam altogether and only have LinearRampNode, LogRampNode, DCValueNode, LogTargetNode. But it would make it MORE complex to do simple cases - certainly

n.frequency.value = 1500;

would become a bit of a pain, definitely less understandable, and most of all less efficient:

context.createDCValueNode(1500).connect(n.frequency);
@ugur-zongur that code is valid. Given that you're assigning one value to half of itself, I'm not even sure what you mean, precisely, if you're intending a live connection there. But I think a connection is better than an assignment, which is why I think just adding .connect to AudioParam would fix this.
@pendragon-andyh That's not really connecting an AudioParam to an AudioNode per se; he's using an AudioParam in another part of the graph and just copying a reference to that node, much like copying references on a chorus "node". If we just added .connect on AudioParam, and made AudioParams instantiatable, I think this would address @sebpiq's issue too.
@cwilso it was 4 in the morning :), the code has something to do with the solution I had in mind; I suppose I wrote it without fully comprehending yours. I think I get it now, and yes, I also think it's better. Just to be sure: according to your solution, AudioParams will be connectable both to and from, which implies they can now be considered like named inputs or outputs for AudioNodes, right?
@ugur-zongur AudioParams already can be connected TO - like a named input, as you put it - the connections sum and are added to the computed value (calculated from the scheduled values and any .value). However, they aren't currently available as a source (i.e. you can't .connect() them to another node's inputs.)
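For example, under the current behaviour a connected signal is summed onto the param's own value (plain current-API code; the tremolo setup is just an illustration):

var carrier = context.createOscillator();
var amp = context.createGain();
var lfo = context.createOscillator();
var lfoDepth = context.createGain();

amp.gain.value = 0.5;      // the param's computed base value
lfo.frequency.value = 4;   // 4 Hz tremolo
lfoDepth.gain.value = 0.4; // scale the LFO's +/-1 output to +/-0.4

lfo.connect(lfoDepth);
lfoDepth.connect(amp.gain); // the connection is summed with .value
carrier.connect(amp);
amp.connect(context.destination);
carrier.start();
lfo.start();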
@cwilso yes, that was what I meant. I should have emphasised "outputs" :). Thanks for clearing that up.
@cwilso

have LinearRampNode, LogRampNode, DCValueNode, LogTargetNode

No, just RampNode, with the same methods as AudioParam's:

n.frequency = 1500
context.createRampNode(1500).connect(n.frequency)
The fewer concepts, the better. So all in all it would IMO be much simpler to understand. Definitely not more complex...
Look at the beauty ... you remove a concept from the spec, and open up a world of possibilities at the same time:
// Simple frequency modulation
var freqMod = context.createOscillator() // modulator
var mult = context.createGain()          // modulation depth
var add = context.createRampNode()       // schedulable base frequency (DC)
var osc = context.createOscillator()     // carrier

freqMod.connect(mult)
mult.connect(add)
add.connect(osc.frequency)               // drive the carrier's frequency slot

mult.gain = 50                           // deviation of 50 Hz
add.setValueAtTime(440, 0)               // base frequency of 440 Hz
Right now you cannot even do this simplest kind of FM in an obvious way.
@sebpiq @cwilso said "make them (AudioParams) instantiatable" in his answer to @pendragon-andyh. So what I understand from this is that he has something like the following in mind.
// Simple frequency modulation
var freqMod = context.createOscillator() // modulator
var mult = context.createGain()          // modulation depth
var add = context.createAudioParam()     // instantiatable
var osc = context.createOscillator()     // carrier

freqMod.connect(mult)
mult.connect(add)
add.connect(osc.frequency)               // connectable

mult.gain.value = 50
add.setValueAtTime(440, 0)
So you can do this. Fewer concepts is better, but I can't speculate about it right now, because I'll be talking about more concepts now :)
@cwilso if this will be the way to go, then I think there should be a separation like InputAudioParam and OutputAudioParam, since an input-to-input connection or an output-to-output connection is meaningless. Excluding connect() from InputAudioParam would do the trick, I suppose, and also removing the modification functions, e.g. setValueAtTime, from OutputAudioParam (edit: we'd lose the functionality above, so the modification functions should exist).
Another issue: the documentation of the AudioWorker section says "read-only Float32Array". I don't know whether this "read-only" means the array is immutable or just that the reference to it is read-only, but if it's the former, then I think this should obviously change too for the output case.
Jumping in to sidetrack...
The instantiable AudioParam object is just a converter from an audio signal to a-rate control data. I believe it is (and should be) only useful where you create your own node design with an AudioWorker.
Also, the obvious example of FM simply looks like a PureData patch (osc~, *~, and line~), and I rather think connecting the output of an AudioNode directly into an AudioParam makes more sense in terms of the semantics. Having to instantiate a "ramp" is sort of the PureData way of doing this; I would say it is just a different paradigm. In addition, the simple FM implementation based on the current spec is not really that different from the example code above.
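For comparison, here is a sketch of that simple FM against the current spec (plain current-API calls; the values are illustrative):

// FM with the current API: AudioParams already accept connections.
var freqMod = context.createOscillator(); // modulator
var depth = context.createGain();
var osc = context.createOscillator();     // carrier

depth.gain.value = 50;     // deviation: 50 Hz
osc.frequency.value = 440; // base frequency; schedulable via
                           // osc.frequency.setValueAtTime(440, 0)
freqMod.connect(depth);
depth.connect(osc.frequency); // summed with the base value
osc.connect(context.destination);
freqMod.start();
osc.start();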
I partially agree that the current AudioParam design is not perfect, but it serves various use cases pretty well. Let's not forget that we have to deal with event scheduling very carefully due to the architecture of the JavaScript thread. I guess the main goal of the current AudioParam design is to achieve precise scheduling of sample-accurate automation/interpolation.
Sorry about the distraction, but I would definitely love to hear more ideas and opinions about this.
Also the obvious example of FM simply looks like a PureData patch
and a SuperCollider patch, and a ChucK patch, and a Csound patch, ... etc ... Pd and SuperCollider have been around for twenty years, so it would be good to take inspiration from them, as they have been refined over all this time to serve this specific purpose. Let's not reinvent the wheel.
Basically, to do proper FM synthesis you need to be able to control your modulator, and for this you need a DC offset (the index), and you need to be able to schedule value changes for this DC. And it turns out that this is exactly what AudioParam does, except that it adds an unnecessary layer of complexity.
Except AudioParams remove that layer of complexity for the cases they're mostly used for - namely, controlling audio parameters on other nodes.
If you have instantiable AudioParams that are connectable, you essentially have precisely what you've asked for - a schedulable value node.
@sebpiq
and a SuperCollider patch, and a chuck patch, and a csound patch, ... etc ...
No. I was specifically referring to PureData because the example code is just equivalent to a PureData patch. ChucK doesn't have an extra layer for automatable unit-generator parameters. SuperCollider has the concept of a-rate and k-rate, which I believe is very similar to what we can conceptually see in the current spec of the Web Audio API.
I believe the current design - encapsulating AudioParams into the node - was a reasonable one, because the API itself was geared toward a wide range of audiences. However, as @cwilso suggested, instantiable AudioParams might be the most elegant solution to this type of issue.
By the way, WAAX has several classes to abstract AudioParams with more musically meaningful data. This might not be directly related to OP's issue, but it can be an example of the abstraction of AudioParams.
https://github.com/hoch/WAAX/blob/master/src/waax.core.js#L60
yeah ... I probably got a bit carried away with ChucK. I haven't used it for several years.
SuperCollider is conceptually very similar to the Web Audio API: you have a graph with nodes that run on a server, and an API for a client language to change some of the parameters of these nodes ... and, yeah, schedule things. Pd (and Max) is also quite close, and Pd also has a sort of k-rate and a-rate (messages vs. DSP). So I believe both should be sources of inspiration. I don't say they are perfect, of course; both have their share of ugliness. But the basic concepts are solid and really suited to programming with sound. And yeah ... Pd and Max are also geared toward a wide audience - people (from my experience giving workshops) who might not understand a thing about programming or sound. And still they manage!
I understand the reasoning behind AudioParams, and using them as a main tool for control and scheduling, but the fact that there is a need to add so many different ways of using them (plugging an AudioNode into an AudioParam, instantiating an AudioParam, ...) makes me think that it was probably not the best decision. It makes very basic things unintuitive (e.g. the DC thing), which is not good for beginners.
Anyways, I guess AudioParams are here to stay, so I will stop criticizing them :)
Can't this be addressed today (and perhaps for longer) by making use of multiple audio outputs from a node, some of which are intended to be connected to AudioParams of other nodes?
Well, you could certainly use multiple outputs of a node that way. The only node that has multiple outputs today is the ChannelSplitterNode - but you could, say, give DynamicsCompressorNode the known semantic that its second output is the envelope-follower tracking.
However, this doesn't address the use case of "I want a DC offset" - where if you could instantiate an AudioParam, you could easily do:
var dc = new AudioParam();
dc.value = 1;
dc.connect(nodeIWantADCOffsetInputTo);
And any other stuff. (For example, it would be even more obvious that you're creating and scheduling an envelope.)
Postpone.
TPAC resolution: re-look at for V2
F2F: To be reviewed for next conference call to establish effort required to include in V1.
Couldn't the constructible AudioParam be done with a UnitSource node whose output is 1? This node would have one AudioParam, say, gain. Would this not allow you to construct, in effect, an AudioParam, and to connect an AudioParam to other AudioParams?
I find myself creating unit sources all the time, and it's really annoying to have to create a looping one-sample buffer source for this.
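For reference, that workaround looks something like this with today's API (a minimal sketch; someNode is just an illustrative target):

// Today's workaround: a looping one-sample buffer holding 1.0 ...
var buffer = context.createBuffer(1, 1, context.sampleRate);
buffer.getChannelData(0)[0] = 1;
var unitSource = context.createBufferSource();
unitSource.buffer = buffer;
unitSource.loop = true;

// ... feeding a GainNode, whose gain param then acts like a
// constructible AudioParam driving other params.
var dc = context.createGain();
dc.gain.value = 0.5;
unitSource.connect(dc);
unitSource.start();
dc.connect(someNode.frequency); // illustrative target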
@rtoy That does seem elegant, and much less of a perturbation than adding new bells and whistles to either AudioNode or AudioParam.
This can be done that way (by creating a ValueNode that has a single AudioParam, .value, which controls its value). I think the "elegance" in that is elegance in having to put less in the spec, rather than elegance for the users using it (the pattern would result in this code):

var value = context.createValueNode(); value.value.value = 5; // <- hahahaha :)
It's okay, I suppose. Constructible/Connectable audioparam would still be cleaner.
ValueNode is definitely a better name. But using value as the name of the AudioParam (the node.value.value thing) stylistically frightens me. Maybe the param could be called output or something else.
What would a constructible AudioParam look like? Would you have to do basically the same thing:
var p = context.createAudioParam(); // or new AudioParam();
p.value.value = 5;
Or is there some other approach you have in mind?
Regardless, I, personally, would love to have a constant value source node; I can of course work around this, but when using hoch.github.io/canopy to hack a test, a constant value node would be sweet.
Yeah, that's exactly it. The only real change is that AudioParam would need to acquire a .connect().
In that case, let me cast my vote for a constant source node (of some appropriate name) with an audioparam.
Per teleconf: the constant source node probably doesn't work quite right. If you connect, say, an oscillator to the AudioParam, the output would be 1 + the oscillator, which isn't desired. It can be worked around by having the user set the constant source node's value to 0, but this might not be what we want.
According to https://webaudio.github.io/web-audio-api/#computation-of-value, this is the correct behavior. If we make the default value of the constant source node be 0, I think everything will work out as desired. Then it's up to the developer to do the right thing with this constant source node. But it will behave as if it were a constructible AudioParam.
See #902 for a proposed ConstantSourceNode with one AudioParam named sourceValue, defaulting to 0.
Do you need to specify anything about garbage collection?
Maybe "the node will become eligible for garbage collection when there are no javascript references to the node AND when the node is no-longer connected to a part of the audio graph that is being kept alive by a oscillator or buffer-source node".
What should happen if the new ConstantSourceNode is connected directly to the destination node (with no supporting oscillator)? Should it cause a DC offset until it goes out of scope ... or should it be silent, because no real node is driving the graph?
A ConstantSourceNode is a real AudioNode, very similar to an OscillatorNode. I would expect it to behave the same in terms of GC, just like an OscillatorNode. Thus, I wouldn't expect to need to say anything special.

Unless, of course, you're thinking of ConstantSourceNode as if it were a constructible AudioParam. But it's not; it's an AudioNode, at least as I've defined it here. The group needs to decide if this is the correct approach or not.
The ConstantSourceNode differs from other source nodes because it does not have start and stop methods.
I have not rechecked the spec, but my memory says that the audio context holds a reference to oscillator nodes until they stop, at which point they become eligible for garbage collection.
Ah. The current (updated) PR actually includes start and stop methods, but @hongchan and I were just discussing whether this makes sense or not. For an AudioParam, it probably doesn't make sense. But for a source node it does, along with an onended event.
Keep the factory method = yes
Name of attribute = offset
Default value = 1
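Illustrating that resolution, usage would look something like this (a sketch; gainA and gainB are just example targets):

var src = context.createConstantSource(); // factory method kept
// 'offset' is the node's single AudioParam (default 1).
src.connect(gainA.gain); // one schedulable value can drive several params
src.connect(gainB.gain);
src.offset.setValueAtTime(1, context.currentTime);
src.offset.linearRampToValueAtTime(0, context.currentTime + 2);
src.start();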
Binding AudioParam of another AudioNode to an AudioWorker
As far as I understand, it is impossible to change a native AudioNode's parameters in real time. Some use cases I can think of:
A solution can be achieved by making AudioParams bindable to AudioWorkers.
Possible WebIDL can be something like:
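A minimal hypothetical sketch, purely illustrative (the bindParameter name is an assumption, not from any spec draft):

partial interface AudioWorkerNode {
    // Hypothetical: expose an AudioParam belonging to another
    // AudioNode inside the worker under the given name.
    void bindParameter(DOMString name, AudioParam param);
};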
Example main file javascript:
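A hypothetical main-thread sketch under the same assumption (gainNode and the createAudioWorker call shape are illustrative):

var worker = context.createAudioWorker('worker.js'); // AudioWorker-era factory (sketch)
var gainNode = context.createGain();
// Hypothetical call: expose gainNode.gain to the worker under the
// name 'envelopeGain' so the worker can write to it directly.
worker.bindParameter('envelopeGain', gainNode.gain);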
Example worker code:
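And a hypothetical worker-side sketch, reusing the envelopeGain example from above:

// worker.js (hypothetical)
onaudioprocess = function (e) {
    // The bound parameter shows up in e.parameters and is writable, so
    // values computed at control rate reach the native node within the
    // same processing batch.
    e.parameters.envelopeGain.value = e.parameters.envelopeGain.value * 0.5;
};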
Since AudioWorkers are processed sequentially in the audio processing graph, I think changing an AudioParam in an AudioWorker's context won't be a problem in terms of concurrency.