olofson opened this issue 11 years ago
This is a really complicated issue from the design point of view, so I'm delaying it until I have some proper use cases to look at.
Also, #13 could cover this to some extent for now, as that would allow extra channels to be used as effect buses. One might throw in panmix variants with one or more sends, to make this easier to use in instruments/sounds.
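To illustrate the idea (conceptual sketch only; this is not the actual panmix unit or its API, and all names here are made up), a "panmix with one send" variant would basically pan the input to the dry output as usual, and write a scaled copy to an extra channel acting as an effect bus:

```c
/* Hypothetical "panmix with one send" variant - NOT the real panmix unit.
 * Mono input is panned to a stereo dry output; a scaled copy goes to a
 * third channel that serves as an effect bus further up the tree.
 */
typedef struct {
	float pan;		/* -1 (left) .. 1 (right) */
	float send_level;	/* 0 .. 1; level sent to the effect bus */
} panmix_send_params;

void panmix_send_process(const float *in,
		float *out_l, float *out_r, float *out_send,
		const panmix_send_params *p, unsigned frames)
{
	float lgain = 0.5f * (1.0f - p->pan);
	float rgain = 0.5f * (1.0f + p->pan);
	for(unsigned f = 0; f < frames; ++f)
	{
		out_l[f] += in[f] * lgain;
		out_r[f] += in[f] * rgain;
		out_send[f] += in[f] * p->send_level;	/* effect bus channel */
	}
}
```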
The strict tree structure of the DSP graph is nice, simple and handy in many ways, but it becomes a PITA when dealing with "send" effects in the typical MIDI/studio mixer sense.
As it is, you can only send audio up the tree (towards the root/master output, that is), which means the only way we can actually send audio to effect units (reverbs, choruses, etc.) is to pass it along with our output using additional channels. That actually seems rather neat in theory, but it requires support in the A2S language, and a bit of logic to avoid wasting cycles on send buffers that are merely passed along up the tree.
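Something along these lines (conceptual sketch only; not actual engine internals or A2S, and the structs/names are hypothetical): each voice outputs its dry channels plus send channels, and parents just mix/pass the sends along towards the root, skipping send buffers that carry nothing:

```c
/* Hypothetical buffer layout: channels 0..1 are the normal (dry) output,
 * channels 2..3 feed an effect unit that lives further up the tree.
 */
#define FRAMES		256
#define DRY_CHANNELS	2
#define SEND_CHANNELS	2
#define ALL_CHANNELS	(DRY_CHANNELS + SEND_CHANNELS)

typedef struct {
	float buf[ALL_CHANNELS][FRAMES];
	int send_active;	/* skip send buffers when nothing feeds them */
} voice_output;

/* A parent voice mixes its children's dry channels as usual, but just
 * passes the send channels along towards the root, where the effect
 * unit eventually picks them up.
 */
void mix_child_into_parent(const voice_output *child, voice_output *parent)
{
	for(int c = 0; c < DRY_CHANNELS; ++c)
		for(int f = 0; f < FRAMES; ++f)
			parent->buf[c][f] += child->buf[c][f];

	if(!child->send_active)
		return;		/* avoid wasting cycles on unused sends */

	for(int c = DRY_CHANNELS; c < ALL_CHANNELS; ++c)
		for(int f = 0; f < FRAMES; ++f)
			parent->buf[c][f] += child->buf[c][f];
	parent->send_active = 1;
}
```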
Another option would be to introduce a way of sending audio to arbitrary voices in the graph. However, that violates the tree graph design, and from the processing order/dependency point of view, it has the same problems as arbitrary processing graphs. It adds complexity and overhead, and we would also need to deal with the loop issue in some well-defined way - probably by prohibiting loops altogether. Also, how would we even address "remote" nodes from within scripts? (Remember: the voices are not static objects! We can't declare "insert points" in programs, as there can be any number of instances (including 0) of any program running in various places in the graph.)
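For reference, this is roughly what the engine would be forced to do with arbitrary sends (conceptual sketch with hypothetical names, nothing Audiality 2 specific): order voices so every send target is processed after all the voices feeding it, and reject feedback loops outright:

```c
/* Plain DFS topological sort of a hypothetical send graph, with loop
 * detection. sends[a][b] != 0 means voice a sends audio to voice b.
 */
#include <stdio.h>

#define MAX_VOICES	8

static int sends[MAX_VOICES][MAX_VOICES];
static int state[MAX_VOICES];	/* 0 = unvisited, 1 = in progress, 2 = done */
static int order[MAX_VOICES];	/* processing order, sources before targets */
static int order_pos;

static int visit(int v, int nvoices)
{
	if(state[v] == 1)
		return 0;	/* back edge: feedback loop - reject! */
	if(state[v] == 2)
		return 1;
	state[v] = 1;
	for(int t = 0; t < nvoices; ++t)
		if(sends[v][t] && !visit(t, nvoices))
			return 0;
	state[v] = 2;
	order[--order_pos] = v;	/* finished: place before its targets */
	return 1;
}

int main(void)
{
	int nvoices = 4;
	sends[0][2] = 1;	/* voice 0 sends to "reverb" voice 2 */
	sends[1][2] = 1;	/* voice 1 sends to "reverb" voice 2 */
	sends[2][3] = 1;	/* "reverb" 2 feeds "master" voice 3 */
	order_pos = nvoices;
	for(int v = 0; v < nvoices; ++v)
		if(!state[v] && !visit(v, nvoices))
		{
			printf("Feedback loop detected - not allowed!\n");
			return 1;
		}
	for(int i = order_pos; i < nvoices; ++i)
		printf("process voice %d\n", order[i]);
	return 0;
}
```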