RobinSchmidt / RS-MET

Codebase for RS-MET products (Robin Schmidt's Music Engineering Tools)

How can I handle polyphony with your modulation system? #269

Open elanhickler opened 5 years ago

elanhickler commented 5 years ago

Is it ready for polyphony? Could I quickly implement just "every voice is a new instance"? Would it be easy to implement a switch for the connection to be polyphonic or monophonic?

Whatever is easiest, I just want to get OMS to be polyphonic in some way so we can play multiple notes at once.

RobinSchmidt commented 5 years ago

Is it ready for polyphony?

unfortunately, not yet.

Could I quickly implement just "every voice is a new instance"?

instance of what? the core dsp class or the AudioModule? for the latter case, i think that makes no sense, but for the core objects, i would say: in a case where the dsp objects are not too data-heavy, just duplicating them may be ok as a quick and dirty way of doing it. but you may need some sort of "master" object that manages all the voices (keeps their data in sync, allocates them for playing, finally mixes them together, and so on).

generally, i would prefer a design that allows the common data to be shared among the voices - for example, all the breakpoints and other parameters in a breakpoint modulator - keeping only a small, lean per-voice state. for objects that contain even more data, such as sample-based stuff, data sharing among voices is a must.

in straightliner, i implemented polyphony in a rather heavy-handed, inelegant way: there's some notion of "master" and "slave" objects, where the master acts as interface to the framework and updates all the slaves (which are the voices). but that's not how i would do it today. in particular, i would take care that the basic monophonic dsp class is not cluttered with any of the poly stuff. that should be a strictly optional add-on, on a need-to basis, and should not complicate the code for the mono case. on the framework side, we would need polyphonic modulation sources and targets - probably as subclasses of ModulationSource and ModulatableParameter

Would it be easy to implement a switch for the connection to be polyphonic or monophonic?

you mean, once a polyphonic mod-system is in place? well, in liberty, i do it like this: when a polyphonic source/output-pin is connected to a monophonic target/input-pin, it just receives the sum of all voices; when a monophonic output is connected to a polyphonic input, each voice gets the same data. pretty straightforward.
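the poly/mono pin-adaptation rule described above can be sketched like this (the class and member names are made up for illustration - they are not actual Liberty classes):

```cpp
#include <vector>

// poly output -> mono input: the mono side receives the sum of all voices
struct PolyOutput
{
    std::vector<double> voiceValues;  // one value per active voice

    double readMono() const
    {
        double sum = 0.0;
        for (double v : voiceValues)
            sum += v;
        return sum;
    }
};

// mono output -> poly input: every voice gets the same data
struct MonoOutput
{
    double value = 0.0;

    double readForVoice(int /*voiceIndex*/) const { return value; }
};
```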

elanhickler commented 5 years ago

ok

RobinSchmidt commented 5 years ago

here are some of my ideas for how i would implement the dsp side of polyphony today (copied from my Ideas.txt file):

Polyphony:

class LadderVoice
{
    // ... (public interface elided in the notes) ...

protected:

    // contains the shared state (sampleRate, cutoff, reso, etc):
    Ladder* master;

    // voice-specific state:
    double coeff;  // may be subject to key/vel scaling of cutoff
    double y[5];   // each filter voice needs its own state variables
    // etc.
};


- the idea is that the voice-specific state is typically small, while the shared 
 state may be large for some kinds of objects and should not be stored 
 redundantly in each voice (it can be accessed via the pointer to the
 template/master object)
- to recursively compose Voice objects (a synth voice may contain 2 osc voices, 
 a filter voice, and 2 envelope voices, for example), the Voice class may 
 maintain an array of childVoice pointers
- the Voice baseclass may contain a pointer to a VoiceState object that stores 
 things like currentNote, currentVelocity, currentPitchBend, etc. - a pointer
 is used so that this data is also not stored redundantly among a 
 SynthVoice's oscVoice, filterVoice, envVoice, etc. objects
- the overall design goal is to have a framework within which polyphonic 
 instruments can be built without storing any data redundantly
- another design goal is that the core dsp classes need not be aware
 of any polyphony stuff - for example, class Ladder does not deal with any of 
 that - only the subclass LadderVoice introduces this concept, so Ladder can 
 be used monophonically without the burden of voice-handling code
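the childVoice-array and shared-VoiceState points above could look roughly like this (a sketch under the stated design goals - the member names are illustrative, not code from the RS-MET codebase):

```cpp
#include <vector>

// per-note data, stored once and referenced by a SynthVoice and all
// of its child voices (oscVoices, filterVoice, envVoices, ...)
struct VoiceState
{
    int    currentNote      = -1;
    double currentVelocity  = 0.0;
    double currentPitchBend = 0.0;
};

class Voice
{
public:
    virtual ~Voice() = default;

    // propagate the shared state pointer down the composition tree, so the
    // note/velocity/pitchbend data is not stored redundantly per sub-voice
    void setState(VoiceState* newState)
    {
        state = newState;
        for (Voice* child : childVoices)
            child->setState(newState);
    }

protected:
    VoiceState* state = nullptr;
    std::vector<Voice*> childVoices;  // e.g. 2 osc voices, 1 filter voice, 2 env voices
};
```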
RobinSchmidt commented 5 years ago

disclaimer: i have no idea yet, if this design will turn out to be viable - it's just a first idea

elanhickler commented 5 years ago

array of pointers, why not just an array of objects? And start using smart pointers. Do you know how to use smart pointers? Either smart pointers or just plain objects. Edit: No more c pointers for you!

Edit: It's fine to use a pointer that simply points to data, just don't do Object* obj = new Object()

elanhickler commented 5 years ago

I'll want to make my own Voice Manager class. I'm thinking I may want to implement polyphonic portamento where each voice slides to a new note.

Also, I want to have access to each voice, maybe with a dynamic_cast, to access the DSP parameters for each voice so each voice can be different.

elanhickler commented 5 years ago
class VoiceState
{
public:
    VoiceState() = default;
    ~VoiceState() = default;

    double pitchBend = 0;
    double channelPressure = 0;
};

class Voice
{
public:
    Voice() = default;
    ~Voice() = default;

    virtual void noteOn(int key, int vel) = 0;
    virtual void noteOff(int key, int vel) = 0;

    virtual void setAftertouch(double v) {}

    virtual void updatePitchBend() { /*do something with state->pitchBend*/ }
    virtual void updateChannelPressure() { /*do something with state->channelPressure*/ }

protected:
    VoiceState* state = nullptr;  // shared among a synth voice's sub-voices
    double aftertouch = 0;
};
elanhickler commented 5 years ago

aftertouch and channel pressure don't make sense here because they are like modulation sources that need to be connected to a parameter. That's why your modulation system needs to be integrated somehow. How else would we assign things to parameters?

RobinSchmidt commented 5 years ago

array of pointers, why not just an array of objects?

because at compile time, you don't know yet which Voice subclass the object will be - at least not inside the VoiceManager baseclass. you will know it in your subclass (like MySynth : VoiceManager), but if you want to implement things like noteOn in the VoiceManager baseclass, you can only use pointers.
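a minimal sketch of that point - the baseclass can only dispatch to subclass behavior through pointers (or references), because with an array of concrete objects the code below would not even compile, Voice being abstract:

```cpp
#include <vector>

class Voice
{
public:
    virtual ~Voice() = default;
    virtual void noteOn(int key, int vel) = 0;  // implemented by the concrete subclass
};

class VoiceManager
{
public:
    // works for any Voice subclass without VoiceManager knowing which one -
    // this is exactly what std::vector<Voice> (objects, not pointers) cannot do
    void noteOn(int key, int vel)
    {
        for (Voice* v : voices)
            v->noteOn(key, vel);
    }

protected:
    std::vector<Voice*> voices;
};
```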

RobinSchmidt commented 5 years ago

start using smart pointers. Do you know how to use smart pointers? Either smart pointers or just plain objects. Edit: No more c pointers for you!

for what do you want to use them? i guess mainly for widgets and parameters? the problem is: in the current framework, when you do things like addWidget, addChildEditor, addParameter, etc., the framework takes over ownership, so it already addresses what the whole smart-pointer idea addresses: object deletion (and you shouldn't delete an object twice, or else...). if i had widgets as smart pointers (or direct objects), i could not use the addWidget mechanism of the Editor class - i would need a new function and a new data structure in Editor to keep track of the un-owned widgets.

hmmm... the reason why i use pointers for most of the gui stuff is that juce::Component, back then, just did it that way and i went along with it. maybe it's a bit of historical baggage. nowadays, you can have both direct objects and pointers for child components. i actually think the whole reason for which smart pointers exist - automatic deletion - is taken over by the jura framework anyway, sooo... dunno
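for comparison, an owning-container version of that idea could be sketched like this (the names Editor/Widget/addOwnedWidget are made up for illustration - this is not the actual jura API): a container of unique_ptr deletes its widgets automatically, which is the same service the existing addWidget ownership transfer already provides.

```cpp
#include <memory>
#include <vector>

struct Widget
{
    virtual ~Widget() = default;
};

class Editor
{
public:
    // takes ownership - the widget is deleted automatically
    // when the editor is destroyed, no manual delete needed
    void addOwnedWidget(std::unique_ptr<Widget> w)
    {
        ownedWidgets.push_back(std::move(w));
    }

    size_t numWidgets() const { return ownedWidgets.size(); }

private:
    std::vector<std::unique_ptr<Widget>> ownedWidgets;
};
```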

elanhickler commented 5 years ago

the voice also has to tell the voicemanager when the voice is finished after note off

elanhickler commented 5 years ago

I'm not sure how to design this, I don't know how you're thinking of having polyphony but not have an envelope/filter/etc. per voice. Sounds like you want to optionally have those things per voice.

elanhickler commented 5 years ago
class Voice
{
public:
    Voice(int key, int vel) : key(key), vel(vel) {}  // note: don't call the virtual noteOn here - virtual dispatch doesn't reach subclasses from a constructor
    ~Voice() = default;

    virtual void noteOn(int key, int vel) = 0;
    virtual void noteOff(int key, int vel) = 0;
    virtual double getOutput() const = 0;

    virtual void updatePitchBend(double v) { /*do something with state->pitchBend*/ }

    void endVoice()
    {
        manager->removeVoice(this);
    }

protected:
    int key;
    int vel;
    VoiceState * state = nullptr;
    VoiceManager * manager = nullptr;
};

class VoiceManager
{
public:
    VoiceManager() = default;
    ~VoiceManager() = default;

    void addVoice(Voice * voice)
    {
        voices.push_back(voice);
    }

    void removeVoice(Voice * voice)
    {
        for (auto iter = voices.begin(); iter != voices.end(); ++iter)
            if (*iter == voice)  // == (comparison), not = (assignment)
            {
                voices.erase(iter);
                break;  // erase invalidates the iterator, so stop here
            }
    }

    double getSample()
    {
        double out = 0;  // must be initialized before accumulating
        for (const auto & voice : voices)
            out += voice->getOutput();
        return out;
    }

    std::vector<Voice *> voices;
};
RobinSchmidt commented 5 years ago

I don't know how you're thinking of having polyphony but not have an envelope/filter/etc. per voice

by letting the filter, envelope, etc. itself be polyphonic already - like having a function double getSample(double in, int voice), for example. think of all the breakpoints in an envelope: each voice has the same breakpoints, and much other data can be shared too. actually, the only voice-specific things are the time index (where are we in the envelope) and maybe time constants that depend on key/vel. everything else can (and imho should) be shared among voices
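a tiny sketch of that "the dsp object is itself polyphonic" idea - one set of shared parameters, a small per-voice state, and a getSample(in, voice) call. a one-pole lowpass stands in here for whatever actual filter or envelope class would use this scheme:

```cpp
#include <vector>

class PolyOnePole
{
public:
    PolyOnePole(int numVoices) : y(numVoices, 0.0) {}

    void setCoeff(double newCoeff) { coeff = newCoeff; }  // shared by all voices

    // y[n] = (1-c)*x[n] + c*y[n-1], computed per voice
    double getSample(double in, int voice)
    {
        y[voice] = (1.0 - coeff) * in + coeff * y[voice];
        return y[voice];
    }

private:
    double coeff = 0.0;     // shared data - stored once, not per voice
    std::vector<double> y;  // per-voice state - one sample of memory each
};
```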

RobinSchmidt commented 5 years ago

this is why i have the BreakpointEnvelopeData class. all BreakpointEnvelope objects for the same mod-target (amp, cutoff, ...) but different voices refer to the same "data" object. in this case, it may not be a big issue to duplicate the data, but still... i don't want to do it that way anymore. and as said, there may be data-heavy sorts of dsp objects for which data sharing is mandatory, and we should have a consistent system for handling polyphony that applies to all dsp classes

RobinSchmidt commented 5 years ago

all this data is shared among voices

template<class T>
class rsBreakpointModulatorData
{

public:

  T scaleFactor;
  T offset;
  T bpm;
  T sampleRate;
  T minimumAllowedLevel;
  T maximumAllowedLevel;
  T endLevel;
  T minBreakpointDistance;
  T timeScale;
  T timeScaleByKey;
  T timeScaleByVel;
  T depth;
  T depthByKey;
  T depthByVel;

  int loopStartIndex;
  int loopEndIndex;
  int numCyclesInLoop;  // ?
  int editMode;

  bool loopIsOn;
  bool syncMode;
  bool endLevelFixedAtZero;

  std::vector<rsModBreakpoint<T>> breakpoints;
};
RobinSchmidt commented 5 years ago

but as said - i consider that rather heavy-handed. i think the design i proposed above is more convenient, and for new polyphonic synths i'd rather implement that (...and at some point adapt all the old code to that scheme, too)

RobinSchmidt commented 5 years ago

the problem is that with that data class etc., even a monophonic breakpoint envelope will need to carry all that code - then lying dormant - which is very undesirable

elanhickler commented 5 years ago

Can we lock down a polyphony design? I want to get started on implementing polyphony for my synths. The last major hurdle for my products is polyphony.

RobinSchmidt commented 5 years ago

i'll try to come up with something in the coming days. i actually want to be able to chain polyphonic modules in ToolChain, too - like chain a polyphonic osc with a polyphonic filter, also have some polyphonic modulators around - such that a basic polyphonic synth can be built from my AudioModules in ToolChain itself. the framework should allow that - because that would be very useful

RobinSchmidt commented 5 years ago

hmmm - i'm not so sure anymore about my design idea. maybe the "every voice is a new instance" idea makes more sense...i'm currently looking into how juce does it for inspiration:

https://docs.juce.com/master/classSynthesiserVoice.html https://docs.juce.com/master/classSynthesiser.html

...trying to get my head around their design. i find the introduction of a "SynthesiserSound" class really weird: https://docs.juce.com/master/classSynthesiser.html#details - this will probably not help very much anyway in designing a polyphonic modular modulation system as we need it. one problem with my proposed design above is that polyphonic feedback modulation would probably not be possible... or very inconvenient. maybe the approach of factoring out shared data into a referenced data object, as i did in BreakpointModulator (and otherwise just using arrays of the dsp objects), is indeed the most reasonable thing to do. hmmm

elanhickler commented 5 years ago

Are you considering that half the time one instance of a modulator will need to be applied to all voices (singular instance of that modulator)?

Consider how HISE or Kontakt does it. You add a new LFO and you can place it in the per-voice level or the global/instrument level.

I want to move toward having all my synths have a "click to add modulator" system so for example the interface does not show any LFOs, ADSRs, etc. until they are added one by one by the user. At that point the user can select per voice or instrument... and hopefully be able to change that on the fly.