Closed: elanhickler closed this issue 7 years ago
To set frequency, combining current note, octave, and tuning, do you:
OR
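For reference, both options would rest on the same standard 12-TET conversion. A minimal sketch (the helper name and parameters are mine, not from this thread):

```cpp
#include <cmath>

// Hypothetical helper: combine the current MIDI note, an octave shift, and
// a detune in cents into a frequency in Hz, assuming 12-TET tuning with
// A4 = 440 Hz at MIDI note 69.
inline double noteToFreq(int midiNote, int octaveShift, double detuneCents)
{
    double effectiveNote = midiNote + 12.0 * octaveShift + detuneCents / 100.0;
    return 440.0 * std::pow(2.0, (effectiveNote - 69.0) / 12.0);
}
```

Whether this is computed eagerly on each note-on or lazily on demand is exactly the design question being asked here.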
What about when you want polyphony? Do you instance SpiralGeneratorCore with base class MonoSynth? Or does SpiralGeneratorModule have a base class PolySynth, and somehow PolySynth instantiates an array of SpiralGeneratorCores?
yes, that's almost right. except that the core should not "have as little library-specific code as possible", but none. nothing. totally zero. i mean, the core dsp code should not in any way be dependent on juce. it's typically an object from my rosic library or one that could potentially be included there
module is the glue between the core dsp and juce/jura framework. it handles parameters, state recall, host communication, etc. it can also create a hierarchical tree of child-modules.
and the editor, yes - that's the gui. it doesn't couple directly to any dsp code - it always goes through the parameters of the module. the widgets couple to a Parameter object and that in turn calls a callback function - which can be a member function of the dsp object or, as you like to do, a lambda function defined directly in the module (i think, i'll adopt things like that for my stuff soon also)
as for the spiral generator core being a subclass of MonoSynth: that's no problem, since MonoSynth is a totally free-standing class - no juce dependency here. the main criterion for what goes to "core" and what to "module" is not so much whether or not it contains audio-only code but whether or not there are any juce dependencies.
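The widget → Parameter → callback coupling described above can be sketched generically like this. To be clear, this is not jura's actual Parameter class, just a minimal illustration of the idea:

```cpp
#include <functional>
#include <string>

// Minimal sketch of a Parameter object that a widget sets and that forwards
// the new value to the dsp object via a callback (often a lambda wired up
// by the module). NOT the real jura::Parameter -- an illustration only.
class Parameter
{
public:
    Parameter(std::string name, double initialValue)
        : name(std::move(name)), value(initialValue) {}

    void setValue(double newValue)
    {
        value = newValue;
        if(callback)
            callback(value);            // forwards to the dsp object
    }

    void setCallback(std::function<void(double)> cb) { callback = std::move(cb); }
    double getValue() const { return value; }

private:
    std::string name;
    double value;
    std::function<void(double)> callback;
};

// A framework-free core dsp class (zero juce dependencies, as required):
class SpiralCore
{
public:
    void setFrequency(double f) { freq = f; }
    double freq = 440.0;
};
```

The module would then wire them together, e.g. `freqParam.setCallback([&](double v){ core.setFrequency(v); });`, so the editor never touches the core directly.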
I think I need a new class, something like JerobeamSpiral, and then SpiralGenerator would have a member of JerobeamSpiral.
MonoSynth holds functions like triggerAttack, triggerRelease, processSample, isSilent... I don't know what MonoSynth should have vs Core.
Also, how would a polyphonic synth work?
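From the function names given above, the MonoSynth interface might look roughly like this. The state handling is my assumption; only the four member function names come from the thread:

```cpp
// Rough sketch of a framework-free MonoSynth base class, based only on the
// members named above (triggerAttack, triggerRelease, processSample,
// isSilent). Everything else here is a placeholder assumption.
class MonoSynth
{
public:
    virtual ~MonoSynth() = default;

    virtual void triggerAttack(int note, int velocity)
    {
        currentNote     = note;
        currentVelocity = velocity;
        noteIsOn        = true;
    }

    virtual void triggerRelease() { noteIsOn = false; }

    virtual double processSample() = 0;   // subclass supplies the dsp

    virtual bool isSilent() const { return !noteIsOn; }

protected:
    int  currentNote     = -1;
    int  currentVelocity = 0;
    bool noteIsOn        = false;
};
```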
as for polyphony: you could instantiate an array of cores of the whole MonoSynth (subclass), but that might be wasteful because the voices could share a lot of data. all the breakpoints in the envs are the same - but each voice would have its own copy of it. that's why in rosic, i already have facilities to have a polyphonic breakpoint modulator. in straightliner, i use polyphonic versions of the osc-/modulator and filter-classes. i'm not really sure how to make a re-usable polyphonic design that is easy to use and avoids all the data-duplication that naive array-instancing of the whole synth would require. i guess, i'll have to think about it soon enough...
each voice would actually have the same set of parameters and only a little bit of additional per-voice state (like current note, velocity, etc)
i typically have some "master" object that holds the full parameter set and "slaves" that contain per-voice state. maybe the slaves should have a pointer to the master for accessing shared state...and/or the master has pointers to the slaves to inform them about changes of that shared state (in straightliner, i think, i did only the latter)
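The master/slave idea above, avoiding the data duplication of naively instancing the whole synth per voice, can be sketched like this (class and member names are illustrative, not from rosic):

```cpp
#include <vector>

// "Master" data: the full shared parameter set, one copy total.
struct SharedVoiceParams
{
    double attack      = 0.01;
    double release     = 0.2;
    double detuneCents = 0.0;
};

// "Slave": per-voice state only, plus a pointer back to the shared data.
struct Voice
{
    const SharedVoiceParams* shared = nullptr;
    int  currentNote = -1;
    int  velocity    = 0;
    bool active      = false;

    void noteOn(int note, int vel)
    {
        currentNote = note;
        velocity    = vel;
        active      = true;
    }
};

// Master object: owns the shared params and the voice array, and hands each
// voice a pointer to the shared state (the "slaves point to master" variant).
class PolySynth
{
public:
    explicit PolySynth(int numVoices) : voices(numVoices)
    {
        for(Voice& v : voices)
            v.shared = &params;   // every voice sees the same data
    }

    SharedVoiceParams  params;
    std::vector<Voice> voices;
};
```

Note that envelope breakpoints etc. would live once in the shared data, so each voice only carries a few integers of its own state.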
I'm looking at AciDevil.
You have
This is why I think I need a JerobeamSpiral class. I need to make a bunch of them, Torus, Words, Radar, Boing... and you wanted these, right? Maybe I can create them for you.
Is there a better example than AciDevil?
AciDevil is only monophonic so it doesn't have these complications. better look at straightliner for an example how i handle polyphony. but i'm not sure, if i will continue handling it this way. maybe i can come up with a better design for my new rapt library.
it might be nice to have the jerobeam generators in chainer (alongside with my raybouncer and whatever else is to come). actually my offer was to code/port them with the provision that i may then use them in chainer but that's your call
I'm still trying to wrap my head around MonoSynth and what goes where. I think MonoSynth should hold all midi, note on/off, attack, release phases, DAW-specific stuff, anything not specific to SpiralGenerator. MonoSynth can also hold stuff I want to reuse like panning, amplitude, frequency offset variables, and output limiting/clipping.
...but porting them is not useful to me, I need an entire synthesizer with LFOs, envelopes, GUI, I was planning on doing all that myself.
It would only be useful if you helped me make this.
In any case, you can have the algorithms for your library.
yes, i would just like to include the raw algorithms. each as a simple class that can be thrown in as source - along the lines of RayBouncer.
the MonoSynth class is actually only there because i needed something simple for the first version of Chaosfly - which was a simple gui-less vst plugin in the beginning and we were using my RSLib codebase...with no access to rosic. now, with rosic, i actually have things like that in more sophisticated ways. there's a "PolyphonicInstrumentVoice" class which is not quite unlike the "MonoSynth" class. but with more features and stuff. ...and then, of course, the "PolyphonicInstrument" class (of which Straightliner is a subclass)
It would only be useful if you helped me make this.
actually, the purpose of my framework is to throw such things together easily with some high-level code - like a class that defines, which modules are used and how they are wired together
https://github.com/RobinSchmidt/RS-MET/issues/69#issuecomment-326848023
you haven't explained how to place an LFO in the GUI. I need more explanation. It doesn't matter how easy your framework is if I don't know what's going on.
Here's another thing I could offer you. If/when I understand your library enough to create something from scratch I would gladly write tutorials (along with images, maybe even videos!) to go along with your documentation.
I guess I can look at Chaosfly for how you did the breakpoint editors.
What can I look at in your library as a good thing to follow to a T for spiral generator and all classes and structure and such?
follow to a T
what does that mean? actually, yes, chaosfly is a quite good example of how to include submodules and place their sub-editors on the main gui. the editor component-tree more or less parallels the audio-module tree. it's just important to wire it all up correctly in the constructors such that each sub-editor receives its pointer to the edited AudioModule. look at how the
jura::BreakpointModulatorEditor *editorModEnv, *editorAmpEnv;
members of ChaosGeneratorEditor are created and initialized and how they receive the pointers to the editees. also, the AudioModules themselves may have to receive pointers to their core dsp objects. look at the members:
BreakpointModulatorAudioModule *modEnvModule, *ampEnvModule;
of ChaosGeneratorModule. these submodules receive their pointers to the dsp object in the constructor. this way, i can have a core dsp object that has some hierarchy built in, later wrap an AudioModule hierarchy around that and then later even create an editor hierarchy. it works the same way with oscs, filters, effects, whatever. i need to write it up better some day
follow to a T = follow exactly as closely as possible, every detail, every aspect
so, you want to integrate BreakpointModulators into SpiralGenerator ...and then use them for its parameters via the mod-system? in this case, your core spiral generator should probably not include core rosic::BreakpointModulators (such as the ChaosGeneratorCore does, having them hardwired). instead, just have the BreakpointModulatorAudioModules in your SpiralGeneratorAudioModule class (they will create their own core objects and take ownership, if you don't pass a pointer to an existing object)
have in your SpiralGeneratorModule a ModulationManager object lying around and register these BreakpointModulators as ModulationSources there. and use ModulatableParameters and also register them (as ModulationTargets) with the manager.
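The source/target registration scheme described above can be modeled generically. These are emphatically not jura's real ModulationManager/ModulationSource/ModulatableParameter classes, just a minimal model of "register sources, register targets, apply per sample or block":

```cpp
#include <functional>
#include <vector>

// Hypothetical modulation source, e.g. the current output of an envelope.
struct ModulationSource
{
    std::function<double()> getValue;
};

// Hypothetical modulatable target; simplified to a single connection.
struct ModulationTarget
{
    double baseValue      = 0.0;
    double modulatedValue = 0.0;
    double depth          = 1.0;
    const ModulationSource* source = nullptr;
};

// Hypothetical manager that both sides register with.
class ModulationManager
{
public:
    void registerSource(ModulationSource* s) { sources.push_back(s); }
    void registerTarget(ModulationTarget* t) { targets.push_back(t); }

    void applyModulations()   // called once per sample (or block)
    {
        for(ModulationTarget* t : targets)
        {
            t->modulatedValue = t->baseValue;
            if(t->source)
                t->modulatedValue += t->depth * t->source->getValue();
        }
    }

private:
    std::vector<ModulationSource*> sources;  // kept for bookkeeping/deregistration
    std::vector<ModulationTarget*> targets;
};
```

The real system adds ownership, deregistration, and per-voice handling; this only shows the registration/apply flow being discussed.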
So, should I not have a subclass of MonoSynth?
your core class still needs to remain subclass of MonoSynth
that doesn't make sense, MonoSynth is my own class that has nothing to do with your library. Are you just suggesting it remain a subclass?
yes....class SpiralGenerator : public MonoSynth
that should remain as is.
but i think you will need to change your noteOn/Off functions in SpiralGeneratorModule to pass these events also into the BreakpointModulators (so they can trigger)
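The forwarding described here is simple in shape; a sketch with placeholder names (not the actual jura classes):

```cpp
#include <vector>

// Placeholder for a child envelope/modulator that needs note events to trigger.
struct Modulator
{
    bool triggered = false;
    void noteOn(int /*note*/, int /*vel*/) { triggered = true; }
    void noteOff()                         { triggered = false; }
};

// Sketch of the module forwarding its note events to all child modulators.
struct SynthModule
{
    std::vector<Modulator*> childModulators;

    void noteOn(int note, int vel)
    {
        // ...set the core's frequency etc. here...
        for(Modulator* m : childModulators)   // forward so envelopes trigger
            m->noteOn(note, vel);
    }

    void noteOff()
    {
        for(Modulator* m : childModulators)
            m->noteOff();
    }
};
```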
I made JerobeamSpiral class. Do you suggest any other name?
Anyway, it has a frequency member and a setFrequency function.
Where do I store Octave / Pitch Offset / Detune, etc? So that I can send the final frequency value to the member frequency? MonoSynth? Core? Module?
are these core algorithm parameters? then core. or are these parameters of the midi-state? then MonoSynth
neither. They will eventually be modulatable parameters. They are "convenience user controls".
Octave/Pitch/Detune are standard controls on almost all synthesizers ever made. But it would be redundant to have these in every underlying algorithm in my/your library, so they should be put higher up, or in a subclass. So I guess MonoSynth? What if I later want to add another factor such as... a harmonic multiplier? That would be currentFrequency + currentFrequency*multiplier.
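Wherever these convenience controls end up living, combining them into the final frequency could look like this sketch (all names illustrative; the harmonic-multiplier term follows the formula given above):

```cpp
#include <cmath>

// Hypothetical bundle of "convenience user controls" layered on top of the
// raw algorithm, combined into one final frequency in Hz.
struct PitchControls
{
    int    octave             = 0;    // +/- octaves
    double pitchOffset        = 0.0;  // semitones
    double detune             = 0.0;  // cents
    double harmonicMultiplier = 0.0;  // adds freq * multiplier on top

    double finalFrequency(int midiNote) const
    {
        double note = midiNote + 12.0 * octave + pitchOffset + detune / 100.0;
        double freq = 440.0 * std::pow(2.0, (note - 69.0) / 12.0);
        return freq + freq * harmonicMultiplier;
    }
};
```

Because everything funnels through one function, adding another factor later only touches this one place rather than every algorithm.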
then put them as parameter objects in the module. ...you could define a SynthAudioModule baseclass that already has them and that you then use as your baseclass for your concrete module (SpiralGenerator)
if you go down this route, you may also want to make SynthAudioModuleEditor baseclass that already has control/sliders for them
i, for example, have a PolyphonicInstrumentAudioModule baseclass that has such things like load/save widgets for tuning files, master tuning, etc.
your requirement may not be exactly the same like mine, that's why i suggest to make your own baseclass. functionality that we both always need, i could also put into some baseclass in my jura library
What about something simple like gain? Does every algorithm need a gain? That's just output = output * gain.
hmmm...i think, i can imagine algorithms that don't need a gain parameter.
anything that doesn't change the gain, in case of effects
i mean output gain, every synth needs an output gain.
Oh, I'll put that in Module, because it has a processBlock callback; I can apply the gain before it goes to the inOutBuffer.
every synth, ok, but what about effects?
.........................
say, a pitch-shifter that doesn't affect the gain at all
when I said "every algorithm" i meant the category of algorithms that we are discussing!
edit: i.e. synths
yeahh, ok, i think, my synth baseclass also has a master-gain parameter
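Applying a master gain at the end of the module's processBlock, as suggested above, is a one-liner per sample. A sketch (buffer layout and function name are assumptions):

```cpp
// Apply a master gain to an audio buffer in place, e.g. as the last step
// of the module's processBlock before the samples go to the inOutBuffer.
void applyMasterGain(double* buffer, int numSamples, double gain)
{
    for(int n = 0; n < numSamples; n++)
        buffer[n] *= gain;
}
```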
I don't get how you could put octave/tune in Module when the triggerAttack callback is in Core. When we trigger attack we need to set a new frequency and take octave/tune into account. We can't take those into account unless they are accessible by Core... but they are stored in Parameter of Module, so Core can't recalculate frequency on a new note.
I will make a pitchOffset variable for MonoSynth.
Ok, I made a virtual triggerFrequencyChange() function for MonoSynth, that is called when setPitchOffset() is called or triggerAttack is called.
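That hook could be structured like this; only triggerFrequencyChange(), setPitchOffset(), and triggerAttack() come from the message above, the rest is an assumed sketch:

```cpp
#include <cmath>

// Sketch of a MonoSynth-style base class with a virtual hook that fires
// whenever anything affecting the final frequency changes.
class MonoSynthBase
{
public:
    virtual ~MonoSynthBase() = default;

    void setPitchOffset(double semitones)
    {
        pitchOffset = semitones;
        triggerFrequencyChange();   // offset changed -> recompute
    }

    void triggerAttack(int note)
    {
        currentNote = note;
        triggerFrequencyChange();   // new note -> recompute
    }

    double getFrequency() const
    {
        double n = currentNote + pitchOffset;
        return 440.0 * std::pow(2.0, (n - 69.0) / 12.0);
    }

protected:
    virtual void triggerFrequencyChange() {}  // subclass hook

    int    currentNote = 69;
    double pitchOffset = 0.0;
};
```

A subclass (e.g. the spiral generator) overrides the hook to push getFrequency() into its core dsp object.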
I am adding amplitude to MonoSynth, that will be like pitchOffset in that I can set amplitude with eventual envelopes and modulators. spiralGen refers to JerobeamSpiral, the underlying algorithm for the Core class. amplitude comes from base class MonoSynth.
I don't get how to setup a modulation manager.
in Module:
in Module constructor:
somewhere have a ModulationManager object lying around, a pointer to that object should be passed to the constructor call of the ModulatableAudioModule baseclass
What is the syntax for passing something to the ModulatableAudioModule base class?
I have this:
this:
this:
causes nullptr error
ok I changed modulationManager * to just modulationManager.
your wiki made no mention of metamanagers
every ModulationSource that should be available must be registered with the ModulationManager object
how? what is the function? this?
but then I get this error:
I GIVE UP.
ok - i guess, i should wire up the first modulator for you, then you can add the rest
what - you call "register" and your debugger ends up in "deRegister"? it seems like you are trying to deRegister a source that has not previously been registered. hmmm...dunno...need to have a look at the code
i believe this is solved
3 main classes to build an audio plugin

Core: Holds the core dsp code with as little library-specific code as possible.

Module: Wraps Core into a complete set of parameters, effects, frequency control, amplitude control, modulations, envelopes. Calls such things as processBlock, setSampleRate, noteOn, noteOff, other midi callbacks, host bpm handling.

Editor: Wraps Module into a GUI editor with knobs, buttons, right-click menus, dropdown menus, visualizations.

Usage
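The three-class split described above could be skeletoned like this. A sketch only; the real jura base classes carry many more responsibilities:

```cpp
// Pure dsp, zero framework dependencies.
class SpiralGeneratorCore
{
public:
    void setFrequency(double f) { frequency = f; }
    double getSample() { return 0.0; /* dsp goes here */ }
private:
    double frequency = 440.0;
};

// Glue: parameters, state recall, midi, host communication.
class SpiralGeneratorModule
{
public:
    void noteOn(int /*note*/, int /*velocity*/) { /* set core frequency etc. */ }
    SpiralGeneratorCore core;
};

// GUI: talks only to the module's parameters, never to the core directly.
class SpiralGeneratorEditor
{
public:
    explicit SpiralGeneratorEditor(SpiralGeneratorModule* m) : module(m) {}
private:
    SpiralGeneratorModule* module;
};
```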
So, Core would have a member called "frequency" which you would set via the Editor, which could have "octave" and "tune" sliders. Is my description good so far?
Robin, I had MonoSynth declared as a subclass of SpiralGeneratorCore, but that doesn't make sense, because SpiralGeneratorCore is the core dsp object; it should have no handling of midi, polyphony, etc. Correct? I have this currently, seems a bit redundant (spiralGen refers to SpiralGeneratorCore):

Because Core holds the midi callback functions (through MonoSynth), I don't know the current note, so I can't calculate the Octave/Tune frequency offset, because that requires knowing the current note. So, shouldn't MonoSynth be a subclass of Module, NOT Core?