During MIDI instrument playing it could be possible to change the mapping of fx1 output to dry1 buffer.
A typo: dry1 should be replaced by dry2. So one must read:
During MIDI instrument playing it could be possible to change the mapping of fx1 output to dry2 buffer.
2.1) Now we need a new API that allows changing a particular fx unit parameter. These new API functions will look like the current ones but with an additional parameter. For example:
-(a) The current fluid_synth_set_reverb_roomsize(fluid_synth_t *synth, double roomsize) allows changing the roomsize of all fx units.
-(b) The new API fluid_synth_set_fx_reverb_roomsize(fluid_synth_t *synth, int fx, double roomsize) allows changing only the roomsize of fx unit fx (if fx >= 0). With fx set to -1, this new API function behaves like the current API (i.e. roomsize is applied to all fx units).
Doing it this way maintains backward API compatibility, but new applications should only make use of the new API functions. At a later time it will probably be necessary to deprecate the current API, which becomes redundant with the new one.
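For illustration only, a minimal sketch of the dispatch such a function could perform (the name fluid_synth_set_fx_reverb_roomsize follows the proposal above; set_one_reverb_roomsize() and the hard-coded unit count are hypothetical placeholders, not existing FluidSynth API):

#include <fluidsynth.h>

/* Hypothetical internal helper that would change a single fx unit. */
static int set_one_reverb_roomsize(fluid_synth_t *synth, int unit, double roomsize);

int fluid_synth_set_fx_reverb_roomsize(fluid_synth_t *synth, int fx, double roomsize)
{
    const int nr_units = 2;  /* placeholder for the synth.effects-groups count */
    int i;

    if (synth == NULL || fx < -1 || fx >= nr_units)
        return FLUID_FAILED;

    if (fx == -1)   /* -1 keeps the old behaviour: apply the value to all fx units */
    {
        for (i = 0; i < nr_units; i++)
            if (set_one_reverb_roomsize(synth, i, roomsize) != FLUID_OK)
                return FLUID_FAILED;
        return FLUID_OK;
    }

    /* otherwise only the requested unit is changed */
    return set_one_reverb_roomsize(synth, fx, roomsize);
}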
What do you think ?
The API you suggest would be ok for me. I have a slight preference for calling it fluid_synth_set_reverb_roomsize2() rather than fluid_synth_set_fx_reverb_roomsize(). Not sure.
However, I am struggling in general whether we need that overall flexibility. Perhaps it would be helpful to propose the new APIs for 1.1 and 2.1 on the mailing list.
However, I am struggling in general whether we need that overall flexibility.
This overall flexibility is appreciated when the same synth instance is used by more than one musician simultaneously, each playing his own MIDI instrument. API 1.1 helps to mix instruments to the appropriate loudspeakers or headphones.
Some musicians have difficulties synchronizing their playing when all instruments are heard on a single audio output. For example, during a rehearsal with 4 musicians (bassist, pianist (playing the melody), guitarist, percussionist), if the bassist has more difficulties than the other musicians, both bass and piano audio can be temporarily mixed to the same output. This helps the bassist learn to synchronize with the piano. Of course, using API 1.1 requires a multi-channel capable audio driver.
Perhaps it would be helpful to propose the new APIs for 1.1 and 2.1 on the mailing list.
Yes, I will propose this.
@jjceresa Is that a use-case that you have encountered yourself? Or do you know anybody who has expressed interest in that use-case?
Edit: I mean the use-case "when the same synth instance is used by more than one musician simultaneously, each playing its own MIDI instrument". Is that something you need? Or know someone who needs this? And if so... why? :-)
I mean the use-case "when the same synth instance is used by more than one musician simultaneously, each playing its own MIDI instrument". Is that something you need?
Yes, having only one software synth instance able to play multiple MIDI inputs is something I need. In a home studio, this allows grouping connections with existing external MIDI hardware (for example 2 keyboards).
API 1.1 helps to mix instruments to the appropriate loudspeakers or headphones. Some musicians have difficulties synchronizing their playing when all instruments are heard on a single audio output. For example, during a rehearsal with 4 musicians (bassist, pianist (playing the melody), guitarist, percussionist), if the bassist has more difficulties than the other musicians, both bass and piano audio can be temporarily mixed to the same output.
To me, that sounds like the "custom audio processing before audio is sent to audio driver" use-case as provided by new_fluid_audio_driver2(). It's a little hard for me to understand at which level you intend to place this new API. But perhaps I should just wait once you're ready.
It's a little hard for me to understand at which level you intend to place this new API. But perhaps I should just wait once you're ready
As drafted in point 1 of the first comment, this mapping API is intended to be placed at the MIDI channel level. Here are some details:
/**
 * Set the mixer MIDI channel mapping to audio buffers.
 * This mapping allows:
 * (1) any MIDI channel to be mapped to any audio dry buffer.
 * (2) any MIDI channel to be mapped to any fx unit input.
 * (3) any fx unit output to be mapped to any audio dry buffer.
 *
 * The function allows setting mappings (1), (2) and (3) individually or
 * simultaneously:
 *
 * 1) Mapping between MIDI channel chan_to_out and the audio dry output at
 * index out_from_chan. If chan_to_out is -1 this mapping is ignored,
 * otherwise the mapping is done with the following special case:
 * if out_from_chan is -1, dry audio is disabled for this MIDI channel.
 * This allows playing only fx (with dry muted temporarily).
 *
 * @param synth FluidSynth instance.
 * @param chan_to_out MIDI channel to which out_from_chan must be mapped.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_chan audio output index to map to chan_to_out.
 * Must be in the range (-1 to synth->audio_groups - 1).
 *
 * 2) Mapping between MIDI channel chan_to_fx and the fx unit input at
 * index fx_from_chan. If chan_to_fx is -1 this mapping is ignored,
 * otherwise the mapping is done with the following special case:
 * if fx_from_chan is -1, fx audio is disabled for this MIDI channel.
 * This allows playing only dry (with fx muted temporarily).
 *
 * @param chan_to_fx MIDI channel to which fx_from_chan must be mapped.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param fx_from_chan fx unit input index to map to chan_to_fx.
 * Must be in the range (-1 to synth->effects_groups - 1).
 *
 * 3) Mapping between the fx unit output (which is mapped to chanfx_to_out) and
 * the audio dry output at index out_from_fx. If chanfx_to_out is -1,
 * this mapping is ignored.
 *
 * @param chanfx_to_out indicates the fx unit (currently mapped to
 * chanfx_to_out) whose output must be mapped to out_from_fx.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_fx audio output index that must be mapped to the fx unit
 * output (currently mapped to chanfx_to_out).
 * Must be in the range (0 to synth->audio_groups - 1).
 *
 * @return #FLUID_OK on success, #FLUID_FAILED otherwise
 */
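For illustration, here is how a call to such a function might look in application code; the function name fluid_synth_mixer_set_mapping() and the exact parameter order are hypothetical placeholders for whatever the PR finally names them:

#include <fluidsynth.h>

/* Hypothetical prototype matching the doc comment above (the name is a placeholder). */
int fluid_synth_mixer_set_mapping(fluid_synth_t *synth,
                                  int chan_to_out, int out_from_chan,
                                  int chan_to_fx, int fx_from_chan,
                                  int chanfx_to_out, int out_from_fx);

/* Example: give MIDI channel 1 its own dry buffer and fx unit,
 * and leave the fx-output-to-dry mapping untouched. */
static void route_channel_one(fluid_synth_t *synth)
{
    fluid_synth_mixer_set_mapping(synth,
                                  1, 1,    /* (1) MIDI chan 1 -> dry output 1 */
                                  1, 1,    /* (2) MIDI chan 1 -> fx unit 1    */
                                  -1, 0);  /* (3) ignored: chanfx_to_out == -1 */
}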
The 3 mappings described in the API (see previous comment) are represented in fluidsynth\doc\FluidMixer.pdf on branch master, please see that document. All 3 mapping types are set by the API in realtime.
I must admit I'm still wondering if the use-case justifies the additional public API functions and new features. I imagine that most people who want or need this kind of flexibility are already using either multi-channel output with something like jack, or use multiple fluidsynth VST or DSSI instances in a plugin host. In both cases, channel routing and different external effects per channel are already very easy to configure.
The first part (1) of the proposal has its own merit, I guess. We already have a limited ability to change the channel routing, and 1.1 makes this more explicit and flexible. I think if we implemented that, then we should also change the fluidsynth command-line options and rework the audio-groups / audio-channels settings.
But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.
But I also don't want to be the guy who always rejects new and larger extensions to the codebase. Maybe I'm too cautious here... so please don't take this as a downvote. It's more me thinking out loud about the scope of fluidsynth.
I imagine that most people who want or need this kind of flexibility are already using either multi-channel output with something like jack, or use multiple fluidsynth VST or DSSI instances in a plugin host. In both cases, channel routing and different external effects per channel are already very easy to configure
I am not using jack, VST, or DSSI. I am just directly using a multi-channel capable audio driver (i.e. using a multi-channel audio card). Please see https://github.com/FluidSynth/fluidsynth/pull/667. This allows routing/mixing the instruments to separate loudspeakers.
But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.
Point 2 is about internal fx unit parameters. When 2 distinct MIDI instruments are connected to 2 distinct internal reverb units, I would expect these 2 reverb units to have different parameters. Currently all internal fx units have the same parameters, which is a shortcoming, particularly when these 2 instruments are routed to distinct stereo speakers. As the issue is about a shortcoming of the internal fx units, it is within the scope of fluidsynth. Please note also that the API implementation (PRs 672, 673) requires only a small amount of code.
I didn't want to comment first to avoid biasing Marcus. However, I do share his concerns.
Let me go one step back to your use-case (the practical part of it, not the initial theoretical one):
For example, during a rehearsal with 4 musicians (bassist, pianist (playing the melody), guitarist, percussionist), if the bassist has more difficulties than the other musicians, both bass and piano audio can be temporarily mixed to the same output
Ok, every musician plays his instrument on one MIDI channel. So we need 4 stereo channels, i.e. synth.audio-groups=4. [And probably 4 effects units as well, to give each instrument its own reverb.]
API 1.1 helps to mix instruments to appropriate load speakers or headphones.
Understood. But let's replace "instruments" with "MIDI channels".
This use-case you have is absolutely valid. However, you're providing a bottom-up solution for it by adding more complexity into rvoice_mixer. I don't think this is the correct way, because I don't see a reason for it.
Instead, I would have voted for a top-down solution:
1. A client program that administrates the buffer mappings.
2. A custom audio driver, created via new_fluid_audio_driver2(), providing a custom audio processing function.
3. That function rearranges the buffers after the fluid_synth_process() call based on the previous mapping.
It is up to you whether the client program in 1. is your own demo program or fluidsynth's command shell. And ofc. 2. requires the WaveOut and dsound drivers to learn to support new_fluid_audio_driver2().
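A rough sketch of steps 2 and 3, assuming the FluidSynth 2.x signatures of new_fluid_audio_driver2() and fluid_synth_process(); the my_mapping table is a made-up placeholder for whatever the client program of step 1 administrates, and 4 stereo pairs are assumed:

#include <string.h>
#include <fluidsynth.h>

/* Placeholder for the mapping administrated by the client program (step 1):
 * which dry stereo pair to use for each of the 4 channel groups. */
static int my_mapping[4] = { 0, 1, 2, 3 };

/* Step 3: the custom audio function permutes the driver buffers according to
 * my_mapping before letting the synth render into them. */
static int render_cb(void *data, int len, int nfx, float *fx[], int nout, float *out[])
{
    fluid_synth_t *synth = (fluid_synth_t *)data;
    float *remapped[8];
    int i;

    /* fluid_synth_process() mixes into the buffers, so clear them first. */
    for (i = 0; i < nout; i++) memset(out[i], 0, sizeof(float) * len);
    for (i = 0; i < nfx; i++)  memset(fx[i], 0, sizeof(float) * len);

    if (nout < 8)  /* not enough driver buffers for 4 stereo pairs */
        return fluid_synth_process(synth, len, nfx, fx, nout, out);

    for (i = 0; i < 4; i++)
    {
        remapped[2 * i]     = out[2 * my_mapping[i]];      /* left  */
        remapped[2 * i + 1] = out[2 * my_mapping[i] + 1];  /* right */
    }

    return fluid_synth_process(synth, len, nfx, fx, 8, remapped);
}

/* Step 2: create the custom audio driver with that callback. */
static fluid_audio_driver_t *make_driver(fluid_settings_t *settings, fluid_synth_t *synth)
{
    return new_fluid_audio_driver2(settings, render_cb, synth);
}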
Now coming back to your use-case: While the mapping of MIDI channels to audio channels is indeed rigid in rvoice_mixer, it does not mean that it's rigid when calling fluid_synth_process(). That's because the stereo buffers provided to fluid_synth_process() can alias each other, i.e. they don't have to be four distinct stereo buffers. You could simply pretend to fluid_synth_process() that you have four stereo buffers, while you only provide three distinct stereo buffers. That is, if the first and second stereo buffers alias each other, the bass and piano will be mixed with each other. Likewise, you can decide where to map the effects, because you can control which buffers will be written to under the hood.
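To illustrate the aliasing idea, a sketch under the assumption that synth.audio-channels and synth.audio-groups are both 4, synth.effects-groups is left at 1, and a block of 512 frames is rendered (buffer names and sizes are illustrative):

#include <string.h>
#include <fluidsynth.h>

#define FRAMES 512

/* Three distinct stereo buffers (A, B, C) plus one fx pair for reverb and chorus. */
static float bufA_L[FRAMES], bufA_R[FRAMES];
static float bufB_L[FRAMES], bufB_R[FRAMES];
static float bufC_L[FRAMES], bufC_R[FRAMES];
static float rev_L[FRAMES], rev_R[FRAMES], chor_L[FRAMES], chor_R[FRAMES];

static int render_with_aliasing(fluid_synth_t *synth)
{
    /* Pretend to have four stereo dry buffers: pair 0 and pair 1 alias each
     * other, so channel group 0 (bass) and group 1 (piano) end up mixed into
     * the same physical buffer A. */
    float *dry[8] = {
        bufA_L, bufA_R,   /* group 0 */
        bufA_L, bufA_R,   /* group 1 aliases group 0 */
        bufB_L, bufB_R,   /* group 2 */
        bufC_L, bufC_R    /* group 3 */
    };
    float *fx[4] = { rev_L, rev_R, chor_L, chor_R };  /* reverb L/R, chorus L/R */
    int i;

    /* fluid_synth_process() mixes into the buffers, so clear them first. */
    for (i = 0; i < 4; i++) memset(fx[i], 0, sizeof(float) * FRAMES);
    memset(bufA_L, 0, sizeof(bufA_L)); memset(bufA_R, 0, sizeof(bufA_R));
    memset(bufB_L, 0, sizeof(bufB_L)); memset(bufB_R, 0, sizeof(bufB_R));
    memset(bufC_L, 0, sizeof(bufC_L)); memset(bufC_R, 0, sizeof(bufC_R));

    return fluid_synth_process(synth, FRAMES, 4, fx, 8, dry);
}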
The only drawback of my solution I see is that it would also affect voices that are already playing. Whereas your solution only applies the new mapping to new voices. But does this issue really justify the added complexity of this PR? Esp. since we are talking about "temporary mappings", as far as I understand.
The drawback of your proposal is that it duplicates functionality (i.e. the flexibility of buffer mapping) that is already provided by fluid_synth_process(). A functionality that is not only internal, but also exposed via the public API.
So, in summary, I'm sorry to say, but given this API proposal, I don't see any "new features" that can't already be achieved with fluid_synth_process().
But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.
Point 2 is about internal fx unit parameters. When 2 distinct MIDI instruments are connected to 2 distinct internal reverb units, I would expect these 2 reverb units to have different parameters. Currently all internal fx units have the same parameters, which is a shortcoming, particularly when these 2 instruments are routed to distinct stereo speakers. As the issue is about a shortcoming of the internal fx units, it is within the scope of fluidsynth. Please note also that the API implementation (PRs 672, 673) requires only a small amount of code.
The changes proposed in #673 are actually ok for me, but let's talk about this later separately.
Now coming back to your use-case: While the mapping of MIDI channels to audio channels is indeed rigid in rvoice_mixer, it does not mean that it's rigid when calling fluid_synth_process().
Yes, fluid_synth_process() exposes a powerful mapping/mixing feature for audio buffers. In fact this fluid_synth_process() functionality is not the same as the MIDI channel mapping to audio channels exposed by rvoice_mixer in this PR. The rvoice_mixer MIDI channel mapping is the only one naturally synchronous with the MIDI notes played by the musician. This makes a realtime MIDI channel mapping change possible during the song (without audio artifacts), regardless of whether the mapping change is requested by the musician while he is playing or by another person devoted to the recording.
The only drawback of my solution I see is that it would also affect voices that are already playing. Whereas your solution only applies the new mapping to new voices. But does this issue really justify the added complexity of this PR? Esp. since we are talking about "temporary mappings", as far as I understand.
Yes, the rvoice_mixer realtime MIDI channel mapping solution makes easy, direct recording possible (i.e. without artifacts) while the musicians are playing.
So, in summary, I'm sorry to say, but given this API proposal, I don't see any "new features" that can't already be achieved with fluid_synth_process()
Please be aware that I am sensitive to and aware of the power of fluid_synth_process(), but I don't think that the realtime MIDI channel mapping proposed by this PR is a duplicate of functionality that can be easily achieved using fluid_synth_process(). For the same reason, I also think that the current mix mode of rvoice_mixer is a powerful feature that should stay inside rvoice_mixer.
Now coming back to the application you proposed above (a client that administrates the buffer mappings and creates the custom audio driver). This kind of application could be useful at the audio mixing stage for recording a song on distinct tracks. It could be used by the recording team to prepare a fixed buffer mapping configuration for audio track separation and post-recording processing. Once the track configuration is prepared beforehand, the song played by the musicians can start, and the song will be dispatched to the tracks. It seems that fluid_synth_process() is appropriate for this client application.
I didn't want to comment first to avoid biasing Marcus. However, I do share his concerns.
In summary,
in this PR. The rvoice_mixer MIDI channel mapping is the only one naturally synchronous with the MIDI notes played by the musician. This makes a realtime MIDI channel mapping change possible during the song
Ok, but I don't understand why this is so important. Is it only to avoid potential audio artifacts? If so, wouldn't a simple fade in / fade out easily solve this?
Also, there is another point that I still don't get: Assuming you have a pianist playing on MIDI chan 0 and a bassist on MIDI chan 1. Each musician wears headphones. In the beginning, the bassist's headphones play the bass only, the pianist's the piano only. Now, imagine you map the piano onto the bass, right? Then the bassist will hear both instruments, but the pianist will hear only silence, won't he?
So, I really think what you need is the Jack audio server. It's solving exactly this issue. It should be compilable for Windows as well, have you had a look at it?
This makes a realtime MIDI channel mapping change possible during the song.
Ok, but I don't understand why this is so important?
For example, when a musician plays a phrase, the mapping allows him to temporarily play the dry audio "solo" (the fx is muted) or the fx audio "solo" (the dry audio is muted) and then come back to both (dry + fx) during the same phrase (using a foot or key switch).
Also, there is another point that I still don't get: Assuming you have a pianist playing on MIDI chan 0 and a bassist on MIDI chan1. Each musician wears a headphone.
When musicians play together they never use headphones; they use loudspeakers to be able to hear each other, because they need mutual learning. So mapping 2 instruments to the same output only makes sense if this output is connected to loudspeakers. Some musicians prefer to use headphones physically connected to the same output rather than loudspeakers, simply because they don't want to be bothered by the room acoustics.
So, I really think what you need is the Jack audio server. It's solving exactly this issue. have you had a look at it?
When a musician plays an instrument he is busy with that instrument and doesn't want to be disturbed by the use of a GUI application. Jack is well suited for predefined offline I/O audio connection settings, but not adapted to a musician playing in realtime.
JJC, excuse me for being so persistent here, but I would really like to understand if we are talking about a concrete need you have or if this is more like "it would probably be nice for other people".
So, for me to understand where you are coming from:
JJC, excuse me for being so persistent here, but I would really like to understand if we are talking about a concrete need you have.
Yes ~Tom~ Marcus, this is a real, concrete need I have. I am a keyboardist (not a professional) and my main concern is the ability a musician has (using at most 10 fingers and 2 feet) to achieve what he needs.
Do you yourself play together with other musicians on a single Fluidsynth instance, either via loud speaker
I play with other musicians and would like to do so during training lessons, using only one instance and only one multi-channel audio card connected to loudspeakers. Also, I would like to have as little hardware/software complexity as possible when using this alone (at home).
And have you yourself experienced the need to change channel mappings and switch off fx in real-time during such a music session?
Yes. For example, when playing 2 instruments (i.e. a flute [+ a bit of reverb] accompanied by a piano [+ a bit of reverb]), this gives the impression (illusion) of 4 instruments being present. This is also possible when the 2 instruments are played by only one musician on only one MIDI keyboard.
Do you actually use the Fluidsynth API when playing in this context?
Yes, for the "solo fx/dry on/off" experiment I used a tiny handcrafted application that intercepts MIDI events coming from the MIDI driver and then calls the fluidsynth API. Doing this kind of "solo" experiment using the fluidsynth console command line doesn't give the expected real-time feedback.
my main concern is the ability a musician has (using at most 10 fingers and 2 feet) to achieve what he needs.
So, given your 10 fingers and 2 feet, how exactly do you intend to change the channel mapping while playing the piano? You talked about a footswitch. That's a little too vague. I would like to get some more details for a better understanding: What does the footswitch trigger? A shell command? Or an API call? And does it trigger a simple pre-defined mapping, or does it somehow dynamically react to your situation? (Sry, I really have no clue.)
So, given your 10 fingers and 2 feet, how exactly do you intend to change the channel mapping while playing the piano? You talked about a footswitch. That's a little too vague.
While the instrument's notes are played with the hands, the foot-switch (or a hand push-button switch) triggers a pre-defined logical mapping that will use the MIDI channel of the current note. As you say, the reaction is dynamic to the playing situation (i.e. based on the current MIDI channel of the instrument played by the hand). The mapping is executed via an API call. Before playing the song, if the keyboard is split into 2 instruments, piano and flute (i.e. 2 key ranges, each assigned its own MIDI channel, 1 and 2), then during the playing, if a mapping is triggered it will act on the piano or the flute.
the foot-switch triggers a pre-defined logic mapping that will use the MIDI channel of the current note. [...] The mapping is executed on API call.
Ok, how about using Jack's API for manipulating ports to rearrange the mapping?
And if you wanted to temporarily disable fx, you could add a default modulator whose secondary source is a switch that pulls CC91 and CC93 to zero.
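A sketch of what such a default modulator could look like, assuming CC80 is the footswitch CC (an assumption; only the reverb-send modulator is shown, the chorus-send one would be analogous). It replaces the usual CC91 -> reverb-send modulator by one whose secondary source is a negative switch on CC80, so that pressing the switch cancels the CC91 contribution for newly started voices:

#include <fluidsynth.h>

static int install_reverb_kill_switch(fluid_synth_t *synth)
{
    int ret;
    fluid_mod_t *mod = new_fluid_mod();

    if (mod == NULL)
        return FLUID_FAILED;

    /* Primary source: CC91 (reverb send), as in the SF2 default modulator. */
    fluid_mod_set_source1(mod, 91,
                          FLUID_MOD_CC | FLUID_MOD_LINEAR |
                          FLUID_MOD_UNIPOLAR | FLUID_MOD_POSITIVE);
    /* Secondary source: CC80 (hypothetical footswitch) as a negative switch:
     * CC80 >= 64 evaluates to 0 and zeroes the modulator output. */
    fluid_mod_set_source2(mod, 80,
                          FLUID_MOD_CC | FLUID_MOD_SWITCH |
                          FLUID_MOD_UNIPOLAR | FLUID_MOD_NEGATIVE);
    fluid_mod_set_dest(mod, GEN_REVERBSEND);
    fluid_mod_set_amount(mod, 200);  /* same amount as the default modulator */

    ret = fluid_synth_add_default_mod(synth, mod, FLUID_SYNTH_OVERWRITE);
    delete_fluid_mod(mod);           /* the synth keeps its own copy */
    return ret;
}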
Sorry for being so nit-picky here, but I really think that this channel mapping use-case should be implemented on a high level. Not by adding more complexity into rvoice_mixer and exposing it to the user. I'm afraid that this will become a burden as soon as we need to change an implementation detail deep down in the mixer and we find that for some reason we cannot do this because it would break the API usage.
Ok, how about using Jack's API for manipulating ports to rearrange the mapping?
I'm afraid that this will become a burden as soon as we need to change an implementation detail deep down in the mixer and we find that for some reason we cannot do this because it would break the API usage.
Ok, I understand your point of view as a maintainer. Unfortunately, Jack covers only a small part of the real needs of musicians using MIDI. Also, please be aware that the current rvoice_mixer fixed channel mapping cannot be worked around by changing the MIDI channel of the MIDI controller that sends the MIDI messages, nor by substituting the channel value somewhere between the MIDI driver and the fluidsynth instance.
Sorry for being so nit-picky here, but I really think that this channel mapping use-case should be implemented on a high level. Not by adding more complexity into rvoice_mixer and exposing it to the user.
Just a note: I still don't understand why you think this PR is adding "more complexity into rvoice_mixer". This PR does a simple, straightforward substitution of the expression (channel % z) by the value of a variable set by an API.
So, we can close this PR. I will continue with a custom version of fluidsynth.
I still don't understand why you think this PR is adding "more complexity into rvoice_mixer". This PR does a simple, straightforward substitution of the expression (channel % z) by the value of a variable set by an API.
Currently, there are no constraints of what and how we map things in rvoice_mixer. The user doesn't need to know / doesn't need to care about that. Thus we can use a simple fixed mapping. Now you want to make this mapping variable and expose it to the user. Hence we will get a bigger API and constrain ourselves to rvoice_mixer's current implementation.
If there were no other options to achieve your use-case, I would buy it. However, given the number of alternative approaches on a higher level (new_fluid_audio_driver2(), Jack, or as Marcus said VST, DSSI), I'm cautious with this step.
So, we can close this PR. I will continue with a custom version of fluidsynth.
Seems like we need a third (or fourth) opinion. @mawe42 What do you think? Are you "still wondering if the use-case justifies the additional public API functions and new features."? Should we discuss that feature on the mailing list? You can also tell me I'm mistaken, then I will give up my reservations on that topic.
What do you think? Are you "still wondering if the use-case justifies the additional public API functions and new features."?
I'm really in two minds about this. On the one hand I think that the MIDI channel to buffer mapping should be limited to two modes of operation:
Option 1 is probably what 90% of users will ever need. Option 2 is for the small number of people who want to use fluidsynth for advanced things like real-time multi-channel live performance. For those advance use-cases, there are really good tools available (Carla, jack + friends, Ableton Live, ...) that can simply take fluidsynth multi-channel output (or multiple fluidsynth instances) and offer very flexible and user-friendly real-time control for live performance. And if you miss functionality (for example to switch an instrument to a different output, controlled via a MIDI foot pedal), you can either search for a plugin that does what you want, or quickly write a plugin yourself.
Using those real-time performance hosts also has the advantage that you can add any effect to the outputs, and set them up so that you can control the effects via MIDI foot pedals or other controllers as well.
So... when I follow this train of thought, I would argue against these changes and would instead propose to rip out existing functionality. Get rid of the audio-groups modulo stuff, even get rid of the whole LADSPA subsystem.
(Side note: I proposed a rewrite of the LADSPA system because I wanted to use additional effects with fluidsynth in my embedded application. It served quite well until recently... but I now have some additional requirements that mean I need more flexibility. So I will switch over to jack and multi-channel output instead. Which is something I should have done from the beginning, I think.)
But I said I'm in two minds about this. So the other way to think about this is: we already have (most of) the features that JJC wants, so let's expose them to the user in the most useful way possible.
We already offer limited control over the mapping via the audio-groups setting. But that is quite restrictive and complicated to understand for the user, I think. Giving users more explicit control over the routing sounds good. But we should take the existing interfaces (e.g. the audio-groups setting) into account as well and clean that up at the same time.
And we already have the separate fx units, so adding new API functions to control their parameters separately makes sense. Here I would like to ask: if we allow individual fx unit control via the API and shell, should we also expose this via the settings?
And if we want to actively support real-time manipulation, to support the use of standalone fluidsynth in live performances, then we need to provide a way for non-API users to access the functionality as well, I think. And no, the shell does not really count. :-)
So maybe we need to implement an OSC handler for real-time live performance control?
So the other way to think about this is: we already have (most of) the features that JJC wants, so let's expose them to the user in the most useful way possible. Giving users more explicit control over the routing sounds good.
Ok, I'll buy it. I'll review #672 in a more detailed manner tomorrow.
But we should take the existing interfaces (e.g the audio-groups setting) into account as well and clean that up at the same time.
Cleaning it up... do you have something specific in mind?
if we allow individual fx unit control via the API and shell, should we also expose this via the settings?
IMO, no. I see the settings more like a basic initialization of the synth, that should be easy to understand and use. If one needs to set details, one should use the synth API. Esp. since you might want to manipulate those parameters in real-time. (I never liked these "real-time" settings. They only make synth API calls under the hood. I prefer direct API usage.)
So maybe we need to implement an OSC handler for real-time live performance control?
Open Sound Control - sounds interesting. I never really had a close look into it, so I don't know. But I think we should keep this kind of real-time manipulation at a minimum. As you said initially, there is already a bunch of software out there for that purpose.
Ok, I'll buy it. I'll review #672 in a more detailed manner tomorrow.
Thanks. Please, no need to hurry.
Thank you both for your useful feedback and your time.
Currently, there are no constraints of what and how we map things in rvoice_mixer. The user doesn't need to know / doesn't need to care about that. Thus we can use a simple fixed mapping.
That is right for the default settings values of audio-groups (1) and effects-groups (1). As soon as the user envisages augmenting these settings to meet his needs, he is faced with the fixed mapping (the modulo stuff), which is quite straightforward when both audio-groups and effects-groups have the same value. When the two settings are different, things become difficult to understand, and the user becomes aware that he is seriously constrained by this default fixed mapping.
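To make the "modulo stuff" concrete, the current fixed assignment is roughly the following (a simplified sketch; the real expressions live in rvoice_mixer, and this PR essentially replaces them by values settable through the API):

/* Simplified view of the current fixed mapping: for a voice on MIDI channel
 * chan, both the dry buffer and the fx unit are derived from the channel
 * number alone. */
static int dry_buffer_index(int chan, int audio_groups)
{
    return chan % audio_groups;     /* which stereo dry buffer receives the voice */
}

static int fx_unit_index(int chan, int effects_groups)
{
    return chan % effects_groups;   /* which internal fx unit receives the voice */
}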
Now you want to make this mapping variable and expose it to the user. Hence we will get a bigger API and constrain ourselves to rvoice_mixer's current implementation.
Right. I am aware that this new mapping API will move the user constraint toward the developer side, which is now constrained to the current rvoice_mixer implementation. I am also 100% aware of the fear about the risk of breaking the mapping API if we change some details in rvoice_mixer. Currently the rvoice_mixer implementation behaviour is fully and only defined by the semantics of the audio-groups and effects-groups settings, and the new mapping API respects this semantics (i.e. the new API only depends on audio-groups and effects-groups). The only thing I see that could break the API would be to remove one of these settings. So the 2 questions I wonder about are: 1) do we intend to remove the current multiple stereo output feature of rvoice_mixer? 2) do we intend to remove the possibility of having more than one internal fx unit?
But we should take the existing interfaces (e.g the audio-groups setting) into account as well and clean that up at the same time.
Maybe you made a typo and you are talking about audio-channels? In this case, please have a look at https://github.com/FluidSynth/fluidsynth/pull/663
IMO, no. I see the settings more like a basic initialization of the synth, that should be easy to understand and use.
These settings initialize all fx units with the same values. Individual fx unit initialization does not seem necessary. (Please note that the settings API (key, value) accepts only one value for each named key.)
Maybe you made a typo and you are talking about audio-channels? In this case, please have a look at #663
No, I really mean audio-groups... and really also effects-groups. Above you write:
That is right for the default settings values of audio-groups (1) and effects-groups (1). [...] When the two settings are different, things become difficult to understand, and the user becomes aware that he is seriously constrained by this default fixed mapping.
That's what I mean. The relationship between audio-channels, audio-groups and effects-groups is quite hard to understand, in my opinion. And even harder to explain. And my feeling is that the only reason they are implemented in this way is that using a single number and doing modulo on the channel number was simple to implement.
So in my opinion, if we have shell commands to give users explicit control over the MIDI channel to audio channel, MIDI channel to fx unit, and fx unit to audio channel mappings, then that should be the one and only way to configure different channel mappings. Completely remove the audio-groups and effects-groups settings. Only keep audio-channels and add a new effects-units setting. By default they only create additional output channels and effects units, but they are unused. To use them, the user has to create a configuration file with shell commands that change the default from "all on the first output and first effects unit" to something else.
So in essence I think we should design this feature from the user perspective. What do users need, how can they achieve what they want. And then provide one and only one way to configure it for each interface (fluidsynth exec, API).
And then maybe think about adding OSC or MIDI SysEx commands for the real-time control that you wanted.
Thinking about this some more. Instead of simply implementing what somebody currently needs, I would rather design this feature. So think about what usage scenarios we want to support. Then decide what the best way would be to support and (more importantly) how to actually configure fluidsynth for those scenarios.
I can think of the following:
(Edit: added fourth option)
Are there any more?
how to actually configure fluidsynth for those scenarios. I can think of the following:
The scenarios you describe can already be achieved with fluid_synth_process(), as far as I know.
I would rather design this feature.
Ok, sounds good. But in this case, we should continue this discussion on the mailing list. I don't think that we three can reach a consensus here that covers all use-cases while still being easier to understand and use than the current implementation.
The scenarios you describe can already be achieved with fluid_synth_process(), as far as I know.
That might be, but it's not what I was thinking about. I tried to look from the user perspective. So try to imagine what people need. Then decide which of those use-cases we want to support. Then decide how the user-interface should work. And only then look at the existing implementation and decide whats possible and how.
I don't know... it could also be that I'm thinking too big here.
Ok, sounds good. But in this case, we should continue this discussion on the mailing list. I don't think that we three can reach a consensus here that covers all use-cases while still being easier to understand and use than the current implementation.
Good idea.
I don't know... it could also be that I'm thinking too big here.
We will see once brought to the mailing list. Perhaps you Marcus could / should start a discussion. I'm probably too biased.
Completely remove the audio-groups and effects-groups settings. Only keep audio-channels and add a new effects-units setting.
I am not sure I understand. You probably mean:
- keep the audio-groups functionality but rename this setting to audio-channels.
- keep the effects-groups functionality but rename this setting to effects-units.
- remove the current audio-channels setting.
Is that right?
So think about what usage scenarios we want to support.
I see only 2 types of scenarios: 1) the default mapping, and 2) custom mappings (which depend on the buffer count and fx unit count). I see the default mapping as the only one that is predictable and comprehensible, because it is independent of the buffer count and fx unit count.
Then decide what the best way would be to support and (more importantly) how to actually configure fluidsynth for those scenarios.
At this stage, for the user I see only the basic way (i.e. using the mapping API and companion shell commands). IMO, this should be sufficient for now. Later we could add a high-level interface (i.e. through SysEx or OSC), but only at the appropriate time, when necessary. The real needs for these high-level interfaces will probably appear progressively over time from user experience with the basics (API, shell commands). I mean this should come from user experience, and this will take a long time. In the short term we should provide only the basic ways, then let the user do what he wants with these and wait.
Default: All MIDI channels on single stereo output with effects mixed in.
Please ignore this sentence.
I've just tried to write a post for fluid-dev to start the discussion about this feature, but I'm having a really hard time in trying to come up with a good explanation and (more importantly) with a good question to ask the community.
For starters, I was unsure which audio drivers actually support multi-channel output at the moment. Because I couldn't find any documentation about this, I created the following wiki page: https://github.com/FluidSynth/fluidsynth/wiki/Audio-Drivers
Please have a look at that page and let me know if it is correct. I think such a page would be useful for our users, so I would like to link it into the main documentation in the wiki. Any objections to that?
So, coming back to the discussion... I'm really unsure what to ask the community. JJC has said he has a real use-case for real-time channel-mapping. And we seem to agree that most of what we need to support this is already available in FS, so it would be a good change to make.
The open question is how the API for this feature should look and behave. And here I'm really unsure what JJCs plans for the real-time control is. @jjceresa can you elaborate a little more on how you intend this real-time control to work? I mean the actual practical usage of the feature? Would you create a little script that listens to MIDI events and send text commands to the fluid shell via telnet? Or would you write a wrapper program that uses fluidsynth via the API and implement your own MIDI event handler to control the channel mapping?
And there is the "big question" I came up with: instead of patching another layer on top of the current multi-channel logic, shouldn't we design the multi-channel output from the ground up instead? Which also means revising the audio-channels, audio-groups, effects-channels and effects-groups settings, which are really hard to understand for normal users, in my opinion. When trying to write the post starting the discussion, I also attempted to explain how multi-channel currently works. And that turned out to be really hard to understand, so I dropped that idea.
I'm unsure on how to proceed here. I feel quite strongly that adding the extra layer of complexity in this PR on an already hard to understand feature is problematic. So I really think we should come up with a new and unified way to control the MIDI channel to output channel and MIDI channel to effects-unit mapping. Once that is clean, we should add the real-time controls to change the mapping.
I think such a page would be useful for our users, so I would like to link it into the main documentation in the wiki.
Good Idea, thanks for this page.
Please have a look at that page and let me know if it is correct.
waveout (like dsound) supports multi-channel too.
A lot of unix-like drivers (alsa, ...) do not yet support multi-channel, but this could be done. For example, for alsa, I looked at how to add this support but never proposed a PR because I cannot test it locally here (please see https://github.com/FluidSynth/fluidsynth/issues/665).
waveout (like dsound) supports multi-channel too.
Thanks, I've updated the page.
A lot of unix-like drivers (alsa, ...) do not yet support multi-channel, but this could be done.
Sure, I just wanted to document the current state.
So, coming back to the discussion... I'm really unsure what to ask the community.
audio-channels, audio-groups, effects-channels and effects-groups settings, which are really hard to understand for normal users.
I think the current fx mixer behaviour should first be documented a bit in plain text. I can do that. This should show where the mixer sits in the overall audio path. Also, this document must explain that "multi-channel" in fluidsynth simply means multiple stereo outputs, nothing else. This document is required to understand what these settings represent inside the mixer. (A small presentation of the mixer should be added to the wiki, pointing to this document.) Then later it will be possible to present the API proposal, based on this document, to the mailing list.
@jjceresa can you elaborate a little more on how you intend this real-time control to work? I mean the actual practical usage of the feature?
1) Simple case: 2 musicians play with their MIDI controllers (EWI, keyboard) connected to the same synth instance, using an audio driver with 2 stereo outputs. The fluid instance must appear as if each musician had his own synthesizer and his own stereo output. The synth instance is configured before playing the song, using shell commands.
2) More elaborate case, involving reverb mapping to an output or reverb mute/solo during the song: this is relevant to only one musician (i.e. the EWI player) and triggered by this musician. In this case a MIDI event handler is implemented to intercept MIDI messages coming from the MIDI driver and call the API to control the EWI MIDI channel mapping (a sketch of such a handler follows below).
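A sketch of such a handler, using the standard MIDI driver callback; fluid_synth_mixer_set_mapping() is the same hypothetical name used earlier for the proposed mapping function, and CC80 as the footswitch CC and channel 1 as the EWI channel are assumptions:

#include <fluidsynth.h>

/* Hypothetical prototype of the proposed mapping function (see above). */
int fluid_synth_mixer_set_mapping(fluid_synth_t *synth,
                                  int chan_to_out, int out_from_chan,
                                  int chan_to_fx, int fx_from_chan,
                                  int chanfx_to_out, int out_from_fx);

/* MIDI handler installed via new_fluid_midi_driver(): intercept the footswitch
 * (CC80 here) and remap the EWI channel; forward everything else unchanged. */
static int ewi_handler(void *data, fluid_midi_event_t *event)
{
    fluid_synth_t *synth = (fluid_synth_t *)data;

    if (fluid_midi_event_get_type(event) == 0xB0 &&      /* control change */
        fluid_midi_event_get_control(event) == 80)
    {
        int pressed = fluid_midi_event_get_value(event) >= 64;

        /* Footswitch pressed: mute the dry path of channel 1 (out_from_chan = -1);
         * released: map it back to dry buffer 1. Mappings (2) and (3) untouched. */
        return fluid_synth_mixer_set_mapping(synth,
                                             1, pressed ? -1 : 1,
                                             -1, 0,
                                             -1, 0);
    }

    return fluid_synth_handle_midi_event(synth, event);  /* normal playing */
}

/* Usage: new_fluid_midi_driver(settings, ewi_handler, synth); */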
I must admit I still don't understand where your use-case is coming from. I mean I understand what you want to achieve, but I don't understand why you want to achieve it in this way. But maybe that isn't really important... Rewriting the multi-channel configuration interface has merit in itself, and your use-case would naturally benefit from that, I think.
I think I have an idea how to bring this to the mailing-list now. I will write a proposal what I think would be a really nice and clean way to configure the channel and buffer mapping.
And I must admit that I never saw a use-case for multiple synth instances. However, the use-cases JJC has just described seem like a very suitable case for creating two synth instances. The purpose of squashing this functionality into a single instance is not quite clear to me.
the use-cases JJC has just described seem like a very suitable case for creating two synth instances.
Right, and squashing these 2 synth instances into one requires only one audio driver driving only one audio card (having at least 2 stereo outputs, of course). Otherwise, with 2 synth instances we are forced to create 2 audio drivers, which require 2 distinct audio cards, and we also lose any possibility of mapping/mixing any MIDI channel to any audio output. Side note: making use of only one multi-device capable MIDI driver, use case 1 can be extended to more than 2 MIDI USB input devices without requiring an external MIDI merge box. All this also simplifies the hardware requirements.
I must admit that I still have some reservations regarding this feature. However, I have found a potential use-case and would like to hear what you think whether it would fit in here:
Think of MIDI files: Usually, they are built in the following way: You have one MIDI track that only plays the piano. You have another track that plays only strings. Now you assign the piano track to MIDI channel 0, and the string track to channel 1. Simple and straightforward, great.
Now, I found that the developers of Mario Artist Paint Studio complicate things here: They cut those two tracks into many individual pieces. And then they randomly assign those tiny-tracks to either channel 0 or channel 1. That way, the piano sometimes plays on channel 0 and sometimes on channel 1, meanwhile the strings play on some other channel. And they do this with all 16 channels in a completely time-random way! (probably for copy protection reasons)
In order to obtain a nicely rendered multichannel piece of audio, where each instrument really plays on its dedicated stereo channel, one could either use the API proposed in this PR, or reorder the buffer assignments before calling fluid_synth_process(). Any thoughts?
And they do this with all 16 channels in a completely time-random way! (probably for copy protection reasons)
I think they do that to simulate the movement of instruments, but of course it would be preferable to ask the developers directly.
The reordering of buffer assignments before calling fluid_synth_process() to obtain the rendering you described can be controlled by the mapping set by this API, which resides in the mixer. For example, when some user code is about to call fluid_synth_process(), this code could call the getter functions that should be exposed by the synth's mixer to get the mapping information (dry and fx) and do the corresponding buffer assignment.
I don't see any incompatibility between the suggested API and the fact that the mapping set by this API could be exploited outside of the mixer.
I don't see any incompatibility between the suggested API and the fact that the mapping set by this API could be exploited outside of the mixer.
I'm still struggling with the redundancy: one could simply reorder the buffers provided to fluid_synth_process(), or one could use this new API. I know there is a tiny difference in both approaches, as your API nicely works for realtime mapping, which may be useful when e.g. notes are still playing in the release phase. But I'm still not sure whether this justifies this kind of redundancy.
@mawe42 Do you have any preference, comment or thought about my comment above? If not, no problem. Then I would try to implement that kind of "channel unscattering" for Mario Artist Paint Studio by a) using JJCs proposed API, and b) reorder buffers directly. (But this would probably take a few weeks/months... )
Sorry for the late reply! My initial reaction to your Mario use-case was: that sounds like a perfect use-case for a more elaborate MIDI router. Something stateful, so that you can store values from previous messages and use them as replacements in following messages. It might be overkill... but would probably a fun project :-)
Thinking about it some more, it sounds like a job for a short Python script, reading the original MIDI data and spitting out a cleaned-up version of it with each instrument on its own track. Why would you want to convert it on the fly in FluidSynth?
There surely are various approaches to solve my problem. I was just trying to find a possible use-case for this API. But I'm still not convinced, sorry :/
Same here. Of course it could be a(nother?) use-case for this API, but it feels a little bit like looking for a problem to fit the solution.
A proposal to benefit from the mixer and fx unit capabilities.
1) Currently in fluidsynth the mixer offers the potential to map distinct MIDI instruments to distinct stereo buffers. 2) Similarly, distinct MIDI instruments can be mapped to distinct fx unit inputs.
1) When the musician thinks about a MIDI instrument mapping configuration, he decides on 3 mappings at the MIDI level: MIDI instrument to dry buffer, MIDI instrument to fx unit input, and fx unit output to dry buffer.
Currently these mappings are rigid and aren't real-time capable while MIDI instruments are played. 1.1) A new simple API should offer mapping flexibility in real-time situations.
For example, this allows:
MIDI instrument i1 mapped to dry1 buffer.
MIDI instrument i1 mapped to fx1 input, and fx1 output mapped to dry1 buffer.
MIDI instrument i2 mapped to dry2 buffer.
MIDI instrument i2 mapped to fx2 input, and fx2 output mapped to dry2 buffer.
During MIDI instrument playing it could be possible to change the mapping of fx1 output to dry1 buffer, so that we can hear the fx of instrument i1 leave dry1 and now be mixed with the fx2 of instrument i2.
Another real-time feature is the ability for a musician to temporarily play only fx (dry is silent) or only dry (fx is silent).
2) Since distinct MIDI instruments can be mapped to distinct fx unit inputs, we could now expect to have distinct parameters for distinct fx units. For example, MIDI instrument i1 could have a reverb room-size different from the reverb room-size of instrument i2. Currently there is only one API that sets the same parameter value for all fx units. 2.1) Now we need a new API that allows changing a particular fx unit parameter.
I will propose a PR for these new APIs (1.1) and (2.1).