lenmus / lenmus

LenMus Phonascus is a free open source program (GPL v3) for learning music. It allows you to focus on specific skills and exercises, on both theory and aural training. The different activities can be customized to meet your needs.
http://www.lenmus.org/
GNU General Public License v3.0

JACK interface #9

Open · cecilios opened this issue 8 years ago

cecilios commented 8 years ago

Currently, MIDI output is just a direct connection to the sound card's built-in synthesizer. The problem is that, in consumer PCs, the generated sound quality is not good enough for more advanced ear training exercises. In addition, teachers who are musicians often have a very educated and sensitive ear for sound quality and dislike the sound created this way.

My thoughts:

After some analysis, I have come to two conclusions:

  1. I think the best approach is to include a JACK interface in LenMus. JACK (http://jackaudio.org/) is the de facto standard for interconnecting audio applications and systems in the Linux world, and it is also available for MS Windows. By providing this support in LenMus, users can redirect the generated MIDI stream to any other application or device, allowing the use of MIDI software synthesizers capable of producing better sound quality, or routing the MIDI to other, better sound sources, perhaps hardware.
  2. An interface for VST virtual instruments could be studied and programmed. But I am not sure it is worth the work, because VST is a proprietary specification from Steinberg, which only distributes the SDK for Windows and Apple. Moreover, there is free software for using VST devices with JACK input. Therefore, if LenMus offered a JACK interface, the development of a VST interface would not be required. A direct VST interface would be useful mainly for Windows users who do not want to install JACK. My conclusion is to discard the development of a VST interface.

In conclusion, developing a JACK interface will solve the problem.
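To make the proposal concrete, here is a minimal sketch of what a JACK MIDI output could look like, assuming the standard JACK C API (jack/jack.h, jack/midiport.h). The client name, port name and the single demo event are hypothetical, not actual LenMus code:

```cpp
// Minimal sketch: open a JACK client, register a MIDI output port, and
// emit one demo note-on from the process callback. A real implementation
// would queue events coming from the score player.
#include <jack/jack.h>
#include <jack/midiport.h>
#include <unistd.h>

static jack_port_t* g_midi_out = nullptr;

// JACK calls this once per audio cycle; MIDI events are written into the
// port buffer with a frame-accurate timestamp.
static int process(jack_nframes_t nframes, void*)
{
    void* buf = jack_port_get_buffer(g_midi_out, nframes);
    jack_midi_clear_buffer(buf);
    static bool sent = false;
    if (!sent)   // send a single demo event in the first cycle
    {
        unsigned char ev[3] = {0x90, 60, 100};   // note-on, middle C, vel 100
        jack_midi_event_write(buf, 0, ev, sizeof(ev));
        sent = true;
    }
    return 0;
}

int main()
{
    jack_client_t* client = jack_client_open("lenmus", JackNullOption, nullptr);
    if (!client)
        return 1;
    g_midi_out = jack_port_register(client, "midi_out", JACK_DEFAULT_MIDI_TYPE,
                                    JackPortIsOutput, 0);
    jack_set_process_callback(client, process, nullptr);
    jack_activate(client);
    // The port "lenmus:midi_out" can now be connected to any synthesizer
    // or VST host with a patchbay such as QjackCtl.
    sleep(2);
    jack_client_close(client);
    return 0;
}
```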

This is a task that can be completed in a short time and that requires practically no knowledge of LenMus internals.

kolewu commented 8 years ago

I don't think that JACK is the right answer, because it is used for routing audio (and maybe MIDI) to real audio devices. For now LenMus (Lomse?) has only MIDI output.

On Linux it's very easy to start a software synthesizer and then connect the MIDI output of LenMus with its MIDI input. To get sound out of LenMus I just start timidity as an ALSA sequencer client:

timidity -iA

then find the right ports:

aconnect -io

and then connect the LenMus output to the timidity input. For example, on my computer it would look like:

aconnect 14:0 130:0

Or use a graphical patchbay like the one integrated in QjackCtl (it also has an ALSA sequencer patchbay, and does not only manage JACK connections).

What could be a nice addition, though, is an internal sound engine that generates audio from MIDI, like FluidSynth, which is used for example by MuseScore or by my own simple ChoirPractice (sorry, it's only available in German, but the music is international ;-) ). Together with a good SoundFont this sounds really good.

cecilios commented 8 years ago

LenMus generates MIDI events, but Lomse does not. Lomse just invokes a callback for each MIDI event; it is the responsibility of the app using Lomse to generate the sound when receiving the callback. For this it can do whatever it likes: generate a MIDI event, synthesize the sound directly, etc.
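To illustrate this division of responsibilities, here is a hypothetical sketch of the callback pattern; the names (SoundListener, on_note_on, ...) are illustrative, not the real Lomse API:

```cpp
// Hypothetical callback interface: the library reports sound events, and
// the application decides how to render them.
class SoundListener
{
public:
    virtual ~SoundListener() = default;
    virtual void on_note_on(int channel, int pitch, int velocity) = 0;
    virtual void on_note_off(int channel, int pitch) = 0;
};

// One possible application-side implementation: forward the events as MIDI
// (e.g. via portmidi). Another could synthesize the sound directly.
class MidiRenderer : public SoundListener
{
public:
    void on_note_on(int channel, int pitch, int velocity) override
    {
        // e.g. Pm_WriteShort(stream, 0, Pm_Message(0x90 | channel, pitch, velocity));
        (void)channel; (void)pitch; (void)velocity;
    }
    void on_note_off(int channel, int pitch) override
    {
        // e.g. Pm_WriteShort(stream, 0, Pm_Message(0x80 | channel, pitch, 0));
        (void)channel; (void)pitch;
    }
};
```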

Currently LenMus relies on an external MIDI synthesizer. I use timidity and that is what I recommend when asked.

Adding an internal sound generator to LenMus would be a great feature, as it would make it possible to generate sound without relying on an external MIDI synthesizer, which is sometimes a source of trouble for Linux beginners. But I don't know if FluidSynth can be integrated with LenMus. To my knowledge FluidSynth is an external MIDI synthesizer, so its role is similar to Timidity's. But if FluidSynth can be integrated into the LenMus package it will simplify life for many users. In that case it would be worth opening an issue to integrate FluidSynth in LenMus.

Please confirm that FluidSynth can be integrated with LenMus (I understand that you've done it with ChoirPractice), and I will open an issue for this.

But in any case, FluidSynth doesn't solve the main problem of allowing the use of VST instruments. For this it is necessary either to develop a direct VST interface or to route the MIDI output to other applications and devices.

The proposal to add JACK addresses this last point: making it easy to route the MIDI output to any sound device. JACK can route audio and MIDI, not only audio. Therefore, having a JACK interface will help with using VST instruments as well as with routing MIDI to other devices.

kolewu commented 8 years ago

The easy answer first: yes, it is definitely possible to include FluidSynth as a library and use it to produce sound from MIDI files or events. Here is the API documentation: libfluidsynth API
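As a concrete illustration, a minimal sketch of embedding FluidSynth through the libfluidsynth C API; the SoundFont path is just an example and will differ between systems:

```cpp
// Minimal sketch: create a FluidSynth instance, load a SoundFont, and
// play one note. The SoundFont path is Debian's fluid-soundfont-gm.
#include <fluidsynth.h>
#include <unistd.h>

int main()
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    // The audio driver pulls samples from the synth and sends them to the
    // system audio output (ALSA, JACK, ... depending on the settings).
    fluid_audio_driver_t* driver = new_fluid_audio_driver(settings, synth);

    fluid_synth_sfload(synth, "/usr/share/sounds/sf2/FluidR3_GM.sf2", 1);

    fluid_synth_noteon(synth, 0, 60, 100);   // channel 0, middle C, vel 100
    sleep(1);
    fluid_synth_noteoff(synth, 0, 60);

    delete_fluid_audio_driver(driver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```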

I don't understand the first paragraph: does Lomse create MIDI events and send them with a callback to the calling app? I think it's time to play a bit with the example applications.

Now to the difficult part: why do you think JACK is needed for MIDI routing capabilities? The MIDI produced now is still routable (at least on Linux, through the ALSA sequencer interface). So maybe you are thinking about more convenience for the user?

There is some value in having specialized applications like synthesizers and patchbays (like Catia), but for simple use cases like "I only want to play some notes" there might be demand for something simpler.

So maybe a first implementation of this feature would be an embedded synthesizer (e.g. FluidSynth with a good SoundFont) used as the default sound output, and if this is a configurable option then it will not disturb the power user who wants to use her superduper MIDI e-piano ;-)

AFAIK the ALSA sequencer and JACK (MIDI) are independent from any running application, so a simple router in LenMus wouldn't interfere with sophisticated third-party routers like QjackCtl or Catia.

cecilios commented 8 years ago

Going in order with your points and questions:

Yes, it is definitely possible to include fluidsynth as a library and use it to produce sound from midi files or events.

Great! Then FluidSynth must be added to LenMus. Added issue #37

Why do you think, jack is needed for midi routing capabilities?

My knowledge of the audio/MIDI architecture in the different operating systems is very limited. When I started LenMus I was looking for a platform-independent, simple solution for generating sound via MIDI. The chosen solution was to use the portmidi library. It is very simple to use, it supports Linux, Windows, OSX and probably more, and it is well maintained. The user program can get a list of available MIDI devices (software synthesizers, real instruments, MIDI through, etc.) and decide where to send the events. But I don't know how portmidi works internally or whether its output can be routed to other MIDI devices. I think the answer is no, and so I thought that some MIDI routing interface would be useful. This is the reason for proposing a JACK interface. But probably I'm wrong and MIDI routing can already be done without doing anything! I just don't know how to do it. Perhaps on Linux you can do it with ALSA, but I don't know if this is possible in other operating systems, and I would like a platform-independent solution. That's the reason to propose JACK.
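To make the portmidi usage pattern just described concrete, a minimal sketch based on the portmidi C API: enumerate the devices and send one note-on to the first output found (the device choice is simplistic, for illustration only):

```cpp
// Minimal sketch: list MIDI devices with portmidi and send one note-on
// to the first output device found.
#include <portmidi.h>
#include <cstdio>

int main()
{
    Pm_Initialize();
    int chosen = -1;
    for (int i = 0; i < Pm_CountDevices(); ++i)
    {
        const PmDeviceInfo* info = Pm_GetDeviceInfo(i);
        std::printf("%d: %s, %s%s\n", i, info->interf, info->name,
                    info->output ? " (output)" : "");
        if (info->output && chosen < 0)
            chosen = i;                  // naive choice: first output device
    }
    if (chosen >= 0)
    {
        PortMidiStream* stream = nullptr;
        Pm_OpenOutput(&stream, chosen, nullptr, 0, nullptr, nullptr, 0);
        // note-on: status 0x90 (channel 1), middle C, velocity 100
        Pm_WriteShort(stream, 0, Pm_Message(0x90, 60, 100));
        Pm_Close(stream);
    }
    Pm_Terminate();
    return 0;
}
```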

Does lomse create midi events and send them with a callback to the calling app?

In order to give more freedom to the application using Lomse in deciding how to do things, I have always tried to avoid imposing specific solutions. So I thought it would be nice not to generate MIDI events directly in Lomse, but just to inform the application of the need to generate sound (via callback), and let the application generate the sound as it prefers: by generating MIDI events, by direct synthesis, etc.

It is true that transferring the burden of generating the events to the application is not a good idea in most cases. In fact, the tutorial on Lomse playback is more a tutorial on generating MIDI than on Lomse playback!

As I write this, I'm thinking that perhaps Lomse should generate the MIDI events directly and not transfer this work to the user application. This would simplify using Lomse. And perhaps the best approach would be to offer both solutions: either generate a MIDI event, or inform the user application and let it generate the sound or do whatever it likes. If we decide to move to this approach, the solution for generating MIDI events should be based on a well-maintained library available on most operating systems, and it should allow routing the events.

More brainstorming. Now I see a third alternative: not only generate the MIDI events but also the sounds, perhaps using FluidSynth.

Another idea: as LenMus uses Lomse, all the effort of adding FluidSynth and improving sound in LenMus could be transferred to Lomse. LenMus will use it anyway, and all applications using Lomse will benefit!

Uhm! But how will the user app choose the option for sound generation?

  1. A Lomse build-time configuration option: not good. The default build is what will be available on the system, so applications using Lomse would be forced to statically link against a Lomse library built with the specific options needed by the application. This would create a lot of problems and questions from users trying to use Lomse.
  2. A Lomse run-time configuration option. This adds more dependencies for building the library, but offers more flexibility and does not create problems for the applications using Lomse.
  3. A plug-in architecture. When configuring Lomse (at run time), the application using Lomse would decide how sound is going to be generated. For this, a sound plug-in would be loaded and Lomse would be informed. Some of these plug-ins could be provided by Lomse, for instance: one based on FluidSynth; another based on portmidi and an external synthesizer; a third one could be just a bridge so that the user application receives the callbacks and takes responsibility for generating the sound. This solution offers a lot of flexibility and does not create more dependencies on the Lomse library. At the same time, by providing a couple of off-the-shelf plug-ins, it allows user applications not to deal with sound generation if they prefer that (see the sketch after this list).
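A hypothetical sketch of option 3 to make it concrete; all names (SoundPlugin, set_sound_plugin, ...) are illustrative, not real Lomse API:

```cpp
// Hypothetical plug-in interface: Lomse would define an abstract sound
// interface and the application installs one implementation at run time.
#include <memory>

class SoundPlugin
{
public:
    virtual ~SoundPlugin() = default;
    virtual void note_on(int channel, int pitch, int velocity) = 0;
    virtual void note_off(int channel, int pitch) = 0;
};

// Plug-ins that could be shipped with Lomse:
class FluidSynthPlugin : public SoundPlugin      // renders with libfluidsynth
{
public:
    void note_on(int, int, int) override  { /* fluid_synth_noteon(...)  */ }
    void note_off(int, int) override      { /* fluid_synth_noteoff(...) */ }
};

class PortMidiPlugin : public SoundPlugin        // sends events via portmidi
{
public:
    void note_on(int, int, int) override  { /* Pm_WriteShort(...) */ }
    void note_off(int, int) override      { /* Pm_WriteShort(...) */ }
};

class CallbackPlugin : public SoundPlugin        // bridge to the user app
{
public:
    void note_on(int, int, int) override  { /* invoke user callback */ }
    void note_off(int, int) override      { /* invoke user callback */ }
};

// At run-time configuration the application would pick one:
void configure_sound(std::unique_ptr<SoundPlugin> plugin)
{
    // Hypothetical call: lomse.set_sound_plugin(std::move(plugin));
    (void)plugin;
}
```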

So, my preliminary conclusion: move the sound generation effort to Lomse and implement the plug-in architecture (option 3), as it offers the most flexibility.

Sorry for this long post. What do you think about these brainstorming ideas?

cecilios commented 8 years ago

I have started to document the Lomse API. If you would like to understand how Lomse generates sounds, please see http://lenmus.github.io/lomse/sound_generation.html