If I understand correctly, you want to design a model that represents notes both visually and, of course, audibly. I would suggest creating a value object which has a property that is an MWEngine AudioEvent instance (such as SynthEvent or SampleEvent).
Define your remaining properties and methods to accommodate the creation of a note in the piano roll, and internally translate these to event positions (either using buffer samples or seconds, see the Wiki). Whenever the position of such a note changes horizontally, update the position of its AudioEvent accordingly.
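As a rough illustration (the class and field names below are placeholders, not MWEngine API; it only relies on the seconds-based AudioEvent setters discussed further down in this thread):

// hypothetical value object wrapping an MWEngine AudioEvent
class NoteVO {
    AudioEvent audioEvent;    // e.g. a SynthEvent or SampleEvent instance
    int pitch;                // what the piano roll renders vertically
    float beatPosition;       // 1-based, what the piano roll renders horizontally
    float durationInBeats;

    // whenever the note is moved/resized in the piano roll, translate the
    // beat values into seconds and update the underlying AudioEvent
    void syncAudioEvent( float secondsPerBeat ) {
        audioEvent.setStartPosition(( beatPosition - 1 ) * secondsPerBeat );
        audioEvent.setDuration( durationInBeats * secondsPerBeat );
    }
}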
The Sequencer is indeed a container for any event. You don't have to consciously think about "shoving in events as needed", but merely construct events, define at what point in time they start playing and invoke addToSequencer() once. Whenever the Sequencer's playhead reaches that point in time, the event will become audible. Just ensure that when an event should play at a different time, its position properties are updated.
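Something along these lines (a minimal sketch using the seconds-based setters discussed further down; scheduleNote and moveNote are just illustrative helper names, not MWEngine API):

// schedule an already constructed event once; the Sequencer makes it audible
// when its playhead reaches the given start position
void scheduleNote( AudioEvent event, float startInSeconds, float durationInSeconds ) {
    event.setStartPosition( startInSeconds );
    event.setDuration( durationInSeconds );
    event.addToSequencer(); // invoke once, no need to re-add afterwards
}

// moving the note later only requires updating its position properties
void moveNote( AudioEvent event, float newStartInSeconds ) {
    event.setStartPosition( newStartInSeconds );
}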
For non-quantization, you have absolute freedom to position an event at any point in time, regardless of whether, strictly speaking, that is the most musical interval. Choose your preferred way of performing this calculation (either at the sample level or using seconds) and you can "offset" your events from the grid as you please.
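For example (a sketch, using the secondsPerBeat value calculated further down in this thread; the 10 ms offset is an arbitrary value):

// start the event on the second beat of the first measure,
// then nudge it 10 milliseconds late for a non-quantized feel
float offGridOffsetInSeconds = 0.01f;
event.setStartPosition( secondsPerBeat + offGridOffsetInSeconds );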
Yes, this seems like it was more of a mental gymnastics thing for me to overcome.
I was using float beats for my Caustic apps. So ALL my logic in my other apps is based around floating point numbers in the sequencers, not samples.
I am thinking about writing an adapter and seeing if all the note edit logic I have built up with float beats can be transferred to samples without any loss of detail.
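Roughly, the adapter would just need to multiply (a sketch; samplesPerBeat would come from helper math like the getSamplesPerBar call shown further down, and the method name is just mine):

// convert a 1-based float beat position into an absolute sample offset;
// any loss of detail is bounded by the rounding to a whole sample
int beatsToSamples( float beats, int samplesPerBeat ) {
    return Math.round(( beats - 1 ) * samplesPerBeat );
}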
I think the idea was that Rej gave me OSC messages to communicate with the core; those messages then "posted" float beats to the OSC parser.
Once the parser handled them, the second-level C++ layer handled everything in float beats. But once in process() etc., everything was converted to audio sample locations.
So the ONLY thing that knew about audio samples was the deep core audio engine.
From what you have said, it seems like there really are three ways to set a note event: samples, seconds, or float beats.
So 1.5 beats would be measure 1, beat 1, halfway through the first beat.
Does this make sense to you?
So "2 beats" would be measure 1, start of beat 2 and "7 beats" would be measure 2, start of beat 3 ? Makes sense.
What you need to know is the duration of a measure in seconds. You can leverage the helper math in bufferUtility for this; all it requires is the current sample rate, plus the tempo and time signature of the sequencer, which are properties you defined during setup.
int samplesPerBar = bufferUtility.getSamplesPerBar( sampleRate, tempo, timeSigBeatAmount, timeSigBeatUnit );
float secondsPerBar = bufferUtility.bufferToSeconds( samplesPerBar, sampleRate );
int samplesPerBeat = samplesPerBar / timeSigBeatAmount;
float secondsPerBeat = bufferUtility.bufferToSeconds( samplesPerBeat, sampleRate );
Note: when your song switches time signature or tempo, you must recalculate the above. If you change tempo or time signature when switching between individual measures (instead of adjusting this globally), you must recalculate this more often. Basically, you must ensure that the above values for samplesPerBeat and secondsPerBeat are valid for the measure the Sequencer is currently playing.
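So something along these lines whenever the tempo or time signature updates (a sketch, assuming samplesPerBar / secondsPerBar / samplesPerBeat / secondsPerBeat are kept as fields; the method name is illustrative):

// call this whenever tempo or time signature changes (or per measure when
// these vary between measures) so the cached values match the current measure
void recalculateTimings( int sampleRate, float tempo, int timeSigBeatAmount, int timeSigBeatUnit ) {
    samplesPerBar  = bufferUtility.getSamplesPerBar( sampleRate, tempo, timeSigBeatAmount, timeSigBeatUnit );
    secondsPerBar  = bufferUtility.bufferToSeconds( samplesPerBar, sampleRate );
    samplesPerBeat = samplesPerBar / timeSigBeatAmount;
    secondsPerBeat = bufferUtility.bufferToSeconds( samplesPerBeat, sampleRate );
}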
// convenience method to position events using beats
void positionEventUsingBeats( AudioEvent event, int eventStartMeasure, float startOffsetInBeats, float durationInBeats ) {
    // 1. note we subtract one beat as beats start at 1, not 0
    float startOffsetInSeconds = ( startOffsetInBeats - 1 ) * secondsPerBeat;
    // 2. assumption here is that all measures have the same duration
    //    if not, this must be calculated differently, see explanation below
    //    note we subtract one measure as measures start at 1, not 0, for consistency
    float eventStartMeasureInSeconds = ( eventStartMeasure - 1 ) * secondsPerBar;
    // 3. use positioning in seconds
    event.setStartPosition( eventStartMeasureInSeconds + startOffsetInSeconds );
    // 4. set the duration in seconds. Note: no subtraction here, as a duration
    //    is a length rather than a position (half a beat is simply 0.5 * secondsPerBeat)
    event.setDuration( durationInBeats * secondsPerBeat );
}
Where timeSigBeatAmount is the "3" in 3/4 and timeSigBeatUnit is the "4" in 3/4.
When setting the event's start offset, be sure to add the duration of the measures preceding it. For instance, to position an event half a beat in length on the fourth beat of the third measure:
positionEventUsingBeats( event, 3, 4f, .5f );
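As a quick sanity check of the math (assuming 120 BPM in 4/4, i.e. secondsPerBeat = 0.5 and secondsPerBar = 2.0):

// eventStartMeasureInSeconds = ( 3 - 1 ) * 2.0 = 4.0
// startOffsetInSeconds       = ( 4 - 1 ) * 0.5 = 1.5
// -> the event starts at 5.5 seconds and plays for 0.5 * 0.5 = 0.25 seconds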
Note on using SampleEvents: if the duration of the event should equal the total duration of the sample, it is preferred that you replace step 4 in positionEventUsingBeats with:
event.setSampleLength( event.getBuffer().getBufferSize());
EDIT: Yes, you have the concept of float beats correct, it's just another linear key. What I found great about this is that it has a 1:1 relationship with UI components like piano rolls, so it's really easy to get that data into the UI.
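For instance, mapping a note into the piano roll becomes a single multiplication (a sketch; pixelsPerBeat is just an arbitrary UI zoom value, nothing to do with MWEngine):

// horizontal pixel position of a note in the piano roll; the same float
// beat value drives both the UI and (after conversion) the audio engine
float pixelsPerBeat       = 40f;  // arbitrary zoom level
float noteStartInBeats    = 4f;   // e.g. the fourth beat
float noteDurationInBeats = 0.5f;

float noteX     = ( noteStartInBeats - 1 ) * pixelsPerBeat; // 120 px from the left
float noteWidth = noteDurationInBeats * pixelsPerBeat;      // 20 px wide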
Thanks! I am going to try this out in a proto app.
I have plenty of "material" that I know exactly how it should work, so now I will get it to work.
I'll update with my progress.
Again excuse my noobness but I am trying to wrap my head around the sequencer implementation.
I guess giving an example is easier than asking the question.
I am trying to figure out how I would maintain patterns (with step length) per channel and a piano roll per channel, and then be able to sequence, in real time, the pattern data along with the piano roll data of each channel's sequencer data model that holds the individual events.
I think I can pretty much wrap my head around the pattern stuff.
1) Is this sequencer just a container for any event and it's the client's responsibility to separate the data on the client side and somehow shove events into the main sequencer as needed?
2) How do you set up non-quantized beats without using a grid?