goretkin opened this issue 9 years ago
In general, yes, `play(node::AudioNode)` adds the node to the render chain, and other methods of `play` typically wrap the thing inside an `AudioNode` and then call `play` on that node.
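As a sketch of that wrap-and-delegate pattern (the type and field names here are illustrative assumptions, not AudioIO.jl's actual definitions):

```julia
# Hypothetical sketch of play()'s wrap-and-delegate pattern.
abstract type AudioRenderer end

struct ArrayRenderer <: AudioRenderer
    arr::Vector{Float32}
end

mutable struct AudioNode{T<:AudioRenderer}
    renderer::T
    active::Bool
end

# The "real" method: hook the node into the render chain.
function play(node::AudioNode)
    node.active = true   # stand-in for attaching the node to the mixer
    return node
end

# Convenience methods wrap their argument in an AudioNode and delegate.
play(arr::Vector{Float32}) = play(AudioNode(ArrayRenderer(arr), false))
```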
Also, it's correct that returning a short buffer from a `render` call is an indication that the node has no more content and is done.
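Concretely, that convention might look like this (a minimal sketch, assuming a `DeviceInfo` with a `buf_size` field as discussed later in the thread; all names are assumptions):

```julia
# Minimal sketch of the short-buffer "I'm done" convention.
struct DeviceInfo
    sample_rate::Int
    buf_size::Int
end

mutable struct ArrayRenderer
    arr::Vector{Float32}
    pos::Int
end

function render(node::ArrayRenderer, device_input, info::DeviceInfo)
    stop = min(node.pos + info.buf_size - 1, length(node.arr))
    block = node.arr[node.pos:stop]   # the final block may be short
    node.pos = stop + 1
    return block
end

info = DeviceInfo(44100, 1024)
node = ArrayRenderer(zeros(Float32, 2500), 1)
while true
    block = render(node, Float32[], info)
    length(block) < info.buf_size && break   # short block signals end-of-stream
end
```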
I agree the way the input is handled in the render tree is not obvious, but you can think about it in a functional context, where you're calling the `render` function with the input as one of the arguments. Most `AudioNode`s ignore the root input, but the `InputRenderer` just returns it, sort of looping it around.
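In that functional picture the pass-through is essentially one method (a sketch; the real `InputRenderer` lives in AudioIO.jl's source and also enforces the block-size assertion discussed below):

```julia
# Sketch: the root input is threaded down the render tree as an argument,
# and an input renderer simply hands it back as its output.
struct InputRenderer end

render(::InputRenderer, device_input, info) = device_input
```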
Maybe I'm being dense but I'm not sure what issue you're describing. Do you have a more complete example that shows the issue? `ArrayRenderer` shouldn't be feeding input into an `InputRenderer`, and the assertion you're referring to is making sure that the block from the hardware input is the same size as the hardware block size, which it should be.
> In general, yes, `play(node::AudioNode)` adds the node to the render chain, and other methods of `play` typically wrap the thing inside an `AudioNode` and then call `play` on that node.
My specific confusion regarding that point is that `InputRenderer` really just passes through the audio signal, so `play(AudioInput())` plays back the audio input only because the hardware input is passed through `InputRenderer`. I found the name `play` misleading, because if what you do is, for example, `play(a_node_that_stores_all_its_input)`, then you're actually recording an audio stream. I don't think the name necessarily has to change, though.
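For instance (purely illustrative; AudioIO.jl has no node by this name), a renderer that stashes every input block turns `play` into "record":

```julia
# Hypothetical "recorder": play()-ing a node built on this renderer records
# the hardware input, because render receives it and keeps a copy.
mutable struct RecordingRenderer
    recorded::Vector{Float32}
end
RecordingRenderer() = RecordingRenderer(Float32[])

function render(node::RecordingRenderer, device_input, info)
    append!(node.recorded, device_input)  # append! copies the samples in
    return device_input                   # pass the audio through unchanged
end
```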
It just seems like nodes that take in audio and produce audio should be able to be composed together, and so there should maybe be some specification of what a node needs to satisfy in order to be composable. For example, maybe I test an FM modulator, say `SinOsc(40*AudioInput()+400)`, by attaching its input to the output of `ArrayRenderer` (using something like https://github.com/goretkin/AudioIO.jl/blob/master/src/nodes.jl#L508). You can argue that instead of using `ArrayRenderer` you should have something like an `ArrayStream`.
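One way to express that composition is a tiny combinator that feeds one renderer's output in as the next renderer's input (entirely hypothetical; nothing like this exists in AudioIO.jl under this name):

```julia
# Hypothetical combinator: the output of `first` becomes the device_input
# of `second`, so an array source can stand in for the hardware input.
struct Chain{A,B}
    first::A
    second::B
end

render(c::Chain, device_input, info) =
    render(c.second, render(c.first, device_input, info), info)
```

Testing the FM modulator offline would then be something like `render(Chain(array_source, fm_modulator), silence, info)`, with no hardware involved.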
Similarly, something else that confused me initially, but that I've come around on, is that `device_input` really does mean `device_input`. Nodes should be able to have multiple inputs, and so control inputs, like the one controlling the frequency of `SinOsc`, are probably better off, as they currently are, as nodes that are passed to nodes, where the higher node is responsible for calling `render` on the lower node, passing along `device_input`.
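In sketch form (assumed names, not the actual AudioIO.jl source), that convention looks like this: the oscillator renders its frequency node first, forwarding the same `device_input` down the tree:

```julia
# Sketch of a node-valued control input: the higher node (the oscillator)
# renders the lower node (its frequency) and forwards device_input to it.
mutable struct SinOscRenderer{T}
    freq::T          # another renderer, not a plain number
    phase::Float64
end

function render(node::SinOscRenderer, device_input, info)
    freq_block = render(node.freq, device_input, info)
    out = Vector{Float32}(undef, length(freq_block))
    for i in eachindex(freq_block)
        node.phase += 2pi * freq_block[i] / info.sample_rate
        out[i] = sin(node.phase)
    end
    return out
end
```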
(Along the lines of a "contract", it might be worthwhile to state that if nodes need long-term access to values in `device_input`, or to any output of `render`, then they should make a copy. Also, something should be written about how signals are "pulled" through rather than "pushed" through (or something like that).)
I think the package is really great! I've had a lot of fun.
`ArrayRenderer` may return a buffer that is smaller than `info.buf_size`, since the array might not be a multiple of `info.buf_size`. But `InputRenderer` currently asserts that the size of the input buffer must be `info.buf_size`. I think the idea is that a node can signal that it is done by returning an incomplete buffer; this is the default behavior of the `render` function for `AudioNode`.
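The mismatch in miniature (hypothetical numbers): splitting a 2500-sample array into 1024-sample blocks leaves a short final block, which would trip an assertion expecting every block to be exactly `info.buf_size` long.

```julia
# Block sizes produced by a 2500-sample array with buf_size = 1024.
buf_size = 1024
array_len = 2500
sizes = [min(buf_size, array_len - i) for i in 0:buf_size:array_len-1]
# sizes == [1024, 1024, 452]; the 452 would fail a `== buf_size` assertion
```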
It took me a while to get the idea, but `play` really means "hook the output of the node to the input of the mixer, and the input of the node to the output of the mixer", and `InputRenderer` is really just a pass-through. But in general, you would want to compose nodes so that the output of one node goes to the input of another node. In fact, `SinOsc{AudioNode}` could maybe have been written like this, so that it took its frequency from its `device_input` instead of from another node.
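A sketch of that alternative (hypothetical; it mirrors the node-valued-frequency sketch above, but reads the frequency from `device_input`):

```julia
# Hypothetical variant: the frequency arrives as device_input itself, so
# any upstream node's output block can drive the oscillator directly.
mutable struct SinOscFromInput
    phase::Float64
end

function render(node::SinOscFromInput, device_input, info)
    out = Vector{Float32}(undef, length(device_input))
    for i in eachindex(device_input)
        node.phase += 2pi * device_input[i] / info.sample_rate
        out[i] = sin(node.phase)
    end
    return out
end
```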