In order to implement "plugin" effect chains in the form of separate voices/programs, we need three things:
1. An "input" voice unit that allows a voice to receive audio input and inject it into its unit graph. From the voice and unit graph point of view, this unit would be very similar to the "xsource" unit.
2. An "insert" voice unit that allows a voice to insert a subvoice at a specific point in the unit graph and run audio through it. This unit would be similar to the "inline" unit in implementation, but closer to the "xinsert" unit in terms of the voice/unit graph.
3. A mechanism (and an A2S construct) for spawning a subvoice with an "input" unit, so that it runs under a specific "insert" unit and receives audio from the parent voice at that "input" unit.
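The interaction between the three pieces above can be sketched in plain C. This is a toy model, not the engine's actual API: the `Unit`, `InputState`, and `InsertState` types, and all function names, are hypothetical stand-ins. The "insert" unit hands the parent's bus to a subvoice whose chain begins with an "input" unit, which is how audio from the parent would reach the plugin chain.

```c
#include <string.h>

enum { FRAMES = 4 };

/* Hypothetical voice unit: processes one block of audio in place. */
typedef struct Unit Unit;
struct Unit {
    void (*process)(Unit *u, float *buf, int frames);
    Unit *next;   /* next unit in the voice's unit graph */
    void *state;
};

/* Run a voice's unit chain over one block. */
static void run_chain(Unit *chain, float *buf, int frames)
{
    for (Unit *u = chain; u; u = u->next)
        u->process(u, buf, frames);
}

/* "input" unit state: audio the parent voice pushed to this subvoice. */
typedef struct { const float *feed; } InputState;
static void input_process(Unit *u, float *buf, int frames)
{
    const InputState *s = u->state;
    memcpy(buf, s->feed, sizeof(float) * (size_t)frames);
}

/* Trivial effect standing in for a real plugin: a gain stage. */
static void gain_process(Unit *u, float *buf, int frames)
{
    float g = *(const float *)u->state;
    for (int i = 0; i < frames; i++)
        buf[i] *= g;
}

/* "insert" unit state: the subvoice running at this point in the graph. */
typedef struct {
    Unit *sub_chain;        /* subvoice unit graph, starting with "input" */
    InputState *sub_input;  /* the subvoice's "input" unit state */
    float scratch[FRAMES];  /* subvoice bus */
} InsertState;

static void insert_process(Unit *u, float *buf, int frames)
{
    InsertState *s = u->state;
    s->sub_input->feed = buf;                 /* parent audio -> "input" */
    run_chain(s->sub_chain, s->scratch, frames);
    memcpy(buf, s->scratch, sizeof(float) * (size_t)frames);
}

/* Wire up: parent bus -> "insert" -> subvoice ["input" -> gain] -> back. */
float demo_last_sample(void)
{
    float gain = 0.5f;
    InputState in_state = { 0 };
    Unit gain_u  = { gain_process,  0,       &gain };
    Unit input_u = { input_process, &gain_u, &in_state };
    InsertState ins_state = { &input_u, &in_state, { 0 } };
    Unit insert_u = { insert_process, 0, &ins_state };

    float bus[FRAMES] = { 1.0f, 1.0f, 1.0f, 1.0f };
    run_chain(&insert_u, bus, FRAMES);
    return bus[FRAMES - 1];
}
```

A constant 1.0 bus run through the subvoice's 0.5 gain comes back as 0.5, showing the round trip parent → "input" → effect → "insert" → parent.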
Control (messages) and processing would be similar to inlined subvoices, but will probably need some custom logic. Maybe things can be refactored so that "inline" and "insert" are just two variants using the same mechanisms?
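One way the suggested refactoring could look: a single subvoice attachment type with a variant tag, where control/message handling is shared and only the audio path differs. Again a hypothetical sketch, not existing engine code; `SubvoiceKind`, `subvoice_set_gain`, and `subvoice_process` are illustrative names.

```c
/* "inline" and "insert" as two variants of one subvoice mechanism. */
typedef enum { SV_INLINE, SV_INSERT } SubvoiceKind;

typedef struct {
    SubvoiceKind kind;
    float gain;   /* stand-in for shared voice controls */
} Subvoice;

/* Shared message path: both variants accept the same controls. */
static void subvoice_set_gain(Subvoice *v, float g)
{
    v->gain = g;
}

/* Only the processing differs: an "inline" subvoice generates audio
   into the bus, while an "insert" subvoice transforms what is already
   on the bus. */
static void subvoice_process(Subvoice *v, float *bus, int frames)
{
    for (int i = 0; i < frames; i++) {
        if (v->kind == SV_INLINE)
            bus[i] += v->gain;   /* toy "source": inject a signal */
        else
            bus[i] *= v->gain;   /* toy effect: scale incoming audio */
    }
}

float demo_kinds(void)
{
    float bus[2] = { 1.0f, 1.0f };
    Subvoice src = { SV_INLINE, 0.0f };
    Subvoice fx  = { SV_INSERT, 0.0f };
    subvoice_set_gain(&src, 1.0f);   /* same message path for both */
    subvoice_set_gain(&fx, 0.5f);
    subvoice_process(&src, bus, 2);  /* bus becomes 2.0, 2.0 */
    subvoice_process(&fx,  bus, 2);  /* bus becomes 1.0, 1.0 */
    return bus[0];
}
```

The point of the sketch is that message dispatch and lifetime management live in one place, with the variant tag only consulted where the audio actually flows.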