I don't think that JACK is the right answer, because it is used for routing audio (and maybe MIDI) to real audio devices. For now LenMus (Lomse?) has only MIDI output.
On Linux it is very easy to start a software synthesizer and then connect the MIDI output of LenMus to its MIDI input. To get sound out of LenMus I just start timidity as an ALSA sequencer client:

```
timidity -iA
```

find the right ports:

```
aconnect -io
```

and then connect the LenMus output to the timidity input. On my computer, for example, it looks like this:

```
aconnect 14:0 130:0
```

Or use a graphical patchbay like the one integrated in QjackCtl (it has an ALSA sequencer patchbay too; it does not only manage JACK connections).
What could be a nice addition, though, is an internal sound engine that generates audio from MIDI, like FluidSynth, which is used for example by MuseScore and by my own simple ChoirPractice (sorry, it is only available in German, but the music is international ;-) ). Together with a good SoundFont this sounds really good.
LenMus generates MIDI events, but Lomse does not. Lomse just invokes a callback for each sound event; it is the responsibility of the app using Lomse to generate the sound when it receives the callback. For this it can do whatever it likes: generate a MIDI event, synthesize the sound directly, etc.

Currently LenMus relies on an external MIDI synthesizer. I use timidity, and that is what I recommend when asked.
Adding an internal sound generator to LenMus would be a great feature, as it would make it possible to generate sound without relying on an external MIDI synthesizer, which is sometimes a source of trouble for Linux beginners. But I don't know if FluidSynth can be integrated with LenMus. To my knowledge FluidSynth is an external MIDI synthesizer, so its role is similar to Timidity's. But if FluidSynth can be integrated into the LenMus package, it will simplify life for many users. In that case it would be worth opening an issue to integrate FluidSynth into LenMus.

Please confirm that FluidSynth can be integrated with LenMus (I understand that you have done it with ChoirPractice), and I will open an issue for this.
But in any case, FluidSynth doesn't solve the main problem of allowing the use of VST instruments. For this it is necessary either:

The proposal to add JACK is for this last point: to make it easy to route the MIDI output to any sound device. JACK can route both audio and MIDI, not only audio. Therefore, having a JACK interface would help with using VST instruments as well as with routing MIDI to other devices.
The easy answer first: yes, it is definitely possible to include FluidSynth as a library and use it to produce sound from MIDI files or events. Here is the API documentation: libfluidsynth API
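Just to illustrate the idea, here is a minimal sketch of what embedding libfluidsynth could look like from C++. The SoundFont path and note values are placeholders, and error handling is kept to a minimum; this is not LenMus code, only a demonstration of the library calls:

```cpp
#include <fluidsynth.h>
#include <chrono>
#include <thread>

int main()
{
    // Create the synthesizer and an audio driver that sends the
    // rendered audio to the default audio backend.
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    fluid_audio_driver_t* adriver = new_fluid_audio_driver(settings, synth);

    // Load a General MIDI SoundFont (placeholder path).
    if (fluid_synth_sfload(synth, "/path/to/FluidR3_GM.sf2", 1) == FLUID_FAILED)
        return 1;

    // Play middle C for two seconds: the same kind of call an app could
    // make whenever it receives a "note on" / "note off" event.
    fluid_synth_noteon(synth, 0, 60, 100);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    fluid_synth_noteoff(synth, 0, 60);

    // Clean up in reverse order of creation.
    delete_fluid_audio_driver(adriver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```

It should build with something like `g++ example.cpp $(pkg-config --cflags --libs fluidsynth)`.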
I don't understand the first paragraph: does Lomse create MIDI events and send them with a callback to the calling app? I think it's time to play a bit with the example applications.
Now to the difficult part: why do you think JACK is needed for MIDI routing capabilities? The MIDI produced now is already routable (at least on Linux, through the ALSA sequencer interface). So maybe you are thinking about more convenience for the user?

There is some value in having specialized applications like synthesizers and patchbays (like Catia), but for simple use cases like "I only want to play some notes" there might be some demand for something simpler.
So maybe a first implementation of this feature would be something like:

and if this is a configurable option, then it will not disturb the power user who wants to use her super-duper MIDI e-piano ;-)
AFAIK the ALSA sequencer and JACK (MIDI) are independent from any running application, so a simple router in LenMus would not interfere with sophisticated third-party routers like QjackCtl or Catia.
Taking your points and questions in order:
> Yes, it is definitely possible to include FluidSynth as a library and use it to produce sound from MIDI files or events.
Great! Then FluidSynth must be added to LenMus. Added issue #37
> Why do you think JACK is needed for MIDI routing capabilities?
My knowledge of the audio/MIDI architecture of the different operating systems is very limited. When I started LenMus I was looking for a simple, platform-independent solution for generating sound via MIDI. The chosen solution was to use the portmidi library. It is very simple to use, it supports Linux, Windows, OSX and probably more, and it is well maintained. The user program can get a list of the available MIDI devices (software synthesizers, real instruments, MIDI through, etc.) and decide where to send the events. But I don't know how portmidi works internally, or whether its output can be routed to other MIDI devices. I think the answer is no, and so I thought that some MIDI routing interface would be useful. This is the reason for proposing a JACK interface. But probably I'm wrong and MIDI routing can already be done without doing anything! I just don't know how to do it. Perhaps on Linux this is possible with ALSA, but I don't know if it is possible on other operating systems, and I would like a platform-independent solution. That is the reason for proposing JACK.
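As a side note, this is roughly what the portmidi usage described above looks like: a sketch that lists the available devices and sends one note to the default output. Device selection and error handling are simplified; this is not LenMus code, only an illustration of the portmidi calls:

```cpp
#include <portmidi.h>
#include <cstdio>
#include <chrono>
#include <thread>

int main()
{
    Pm_Initialize();

    // List all MIDI devices; the output devices are the candidates for
    // receiving our events (software synthesizers, "MIDI through", etc.).
    for (int i = 0; i < Pm_CountDevices(); ++i)
    {
        const PmDeviceInfo* info = Pm_GetDeviceInfo(i);
        std::printf("%d: %s, %s (%s)\n", i, info->interf, info->name,
                    info->output ? "output" : "input");
    }

    // Open the default output device and send a note-on / note-off pair.
    PmDeviceID device = Pm_GetDefaultOutputDeviceID();
    PortMidiStream* stream = nullptr;
    if (Pm_OpenOutput(&stream, device, nullptr, 0, nullptr, nullptr, 0) == pmNoError)
    {
        Pm_WriteShort(stream, 0, Pm_Message(0x90, 60, 100));   // note on, middle C
        std::this_thread::sleep_for(std::chrono::seconds(1));
        Pm_WriteShort(stream, 0, Pm_Message(0x80, 60, 0));     // note off
        Pm_Close(stream);
    }

    Pm_Terminate();
    return 0;
}
```

Link with `-lportmidi`. As far as I know, whether the chosen port can then be rewired from other applications depends on the backend portmidi uses on each platform (on Linux it goes through the ALSA sequencer, where `aconnect` can reroute it, as in the earlier examples).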
> Does Lomse create MIDI events and send them with a callback to the calling app?
In order to give applications using Lomse more freedom in deciding how to do things, I have always tried to avoid imposing specific solutions. So I thought it would be nice not to generate MIDI events directly in Lomse, but just to inform the application of the need to generate sound (via a callback), and to let the application generate the sound as it prefers: by generating MIDI events, by direct synthesis, or in any other way.
It is true that transferring to the application the burden of generating the events is not a good idea in most cases. In fact, the tutorial on Lomse playback is more a tutorial on generating MIDI than on Lomse playback!
As I write this, I'm thinking that perhaps Lomse should generate the MIDI events directly and not transfer this work to the user application. That would simplify using Lomse. And perhaps the best approach would be to offer both solutions: either generate a MIDI event, or inform the user application and let it generate the sound (or do whatever else it likes). If we decide to move to this approach, the solution for generating MIDI events should be based on a well-maintained library available on most operating systems, and it should allow routing the events.
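Purely as an illustration of that "offer both solutions" idea, and explicitly not the current Lomse API, a hypothetical shape could be an abstract sound-server interface with a default MIDI implementation that applications may replace:

```cpp
// Hypothetical sketch, not actual Lomse code: an abstract interface that
// the library would call for every sound event, plus a default
// implementation that turns the events into MIDI.
class SoundServer
{
public:
    virtual ~SoundServer() = default;
    virtual void note_on(int channel, int pitch, int velocity) = 0;
    virtual void note_off(int channel, int pitch) = 0;
    virtual void program_change(int channel, int program) = 0;
};

// Built-in default: generates MIDI events (e.g. via portmidi or an
// embedded FluidSynth) so a simple application gets sound "for free".
class DefaultMidiSoundServer : public SoundServer
{
public:
    void note_on(int channel, int pitch, int velocity) override
    {
        // send a MIDI note-on through the chosen backend
    }
    void note_off(int channel, int pitch) override { /* ... */ }
    void program_change(int channel, int program) override { /* ... */ }
};

// An application that wants full control (direct synthesis, a VST host,
// ...) would simply register its own SoundServer instead of the default.
```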
More brainstorming: I now realize there is a third alternative: generate not only the MIDI events but also the sound, perhaps using FluidSynth.
Another idea: as LenMus uses Lomse, all the effort of adding FluidSynth and improving the sound in LenMus could be moved to Lomse. LenMus will use it anyway, and all applications using Lomse will benefit!
Uhm! But how will the user app choose the option for sound generation?:
So, my preliminary conclusions:
Sorry for this long post. What do you think about these brainstorming ideas?
I have started to document the Lomse API. If you would like to understand how Lomse generates sound, please see http://lenmus.github.io/lomse/sound_generation.html
Currently, MIDI output is just a direct connection to the sound card. The problem is that on consumer PCs the generated sound quality is not good enough for the more advanced ear-training exercises. In addition, teachers, being musicians, often have a very educated and sensitive ear for sound quality and dislike the sound created this way.
My thoughts:
After some analysis, I have come to two conclusions:
In conclusion, developing a JACK interface will solve the problem.
This is a task that can be completed in a short time and that, in practice, requires hardly any knowledge of LenMus internals.