jkominek / piano-conversion

Hardware, and some firmware, for acoustic piano to MIDI controller conversion.

Toy/barebones synthesizer + audio output module #34

Open jkominek opened 3 years ago

jkominek commented 3 years ago

A sort of pie-in-the-sky idea that isn't part of any milestone or blocking anything:

A board, conforming to the overall I2C/Qwiic scheme, which presents itself like the I2C UART on the MIDI output board, basically taking a stream of MIDI data written to register 0x00. But instead of streaming that MIDI data elsewhere, it runs it through a simple synth, generating an analog signal and feeding it to some combination of line outs, headphone amps, or power amps for real speakers.
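
Roughly, the receive side could look something like this -- a bare sketch with made-up names, the actual I2C slave plumbing omitted, and only the register-0x00 framing taken from the scheme above:

```c
/* Sketch of the receive side, assuming the same framing as the existing
 * MIDI output board: the I2C master writes register 0x00 followed by raw
 * MIDI bytes. All names here (midi_parser, synth_note_on, ...) are
 * hypothetical, and the I2C slave interrupt glue is left out. */
#include <stdint.h>
#include <stddef.h>

#define REG_MIDI_STREAM 0x00

typedef struct {
    uint8_t status;   /* current running-status byte */
    uint8_t data[2];  /* data bytes collected so far */
    uint8_t count;    /* number of data bytes collected */
} midi_parser;

/* Hypothetical hooks into the synth engine. */
void synth_note_on(uint8_t note, uint8_t velocity);
void synth_note_off(uint8_t note);
void synth_control_change(uint8_t cc, uint8_t value);

static void midi_byte(midi_parser *p, uint8_t b)
{
    if (b & 0x80) {              /* status byte: start a new message */
        p->status = b;
        p->count = 0;
        return;
    }
    p->data[p->count++] = b;

    switch (p->status & 0xF0) {
    case 0x90:                   /* note on (velocity 0 acts as note off) */
        if (p->count == 2) {
            if (p->data[1]) synth_note_on(p->data[0], p->data[1]);
            else            synth_note_off(p->data[0]);
            p->count = 0;        /* keep p->status for running status */
        }
        break;
    case 0x80:                   /* note off */
        if (p->count == 2) { synth_note_off(p->data[0]); p->count = 0; }
        break;
    case 0xB0:                   /* control change (CC 64 = sustain pedal) */
        if (p->count == 2) { synth_control_change(p->data[0], p->data[1]); p->count = 0; }
        break;
    default:                     /* everything else is ignored in this sketch */
        p->count = 0;
        break;
    }
}

/* Called from the I2C slave ISR with the payload of one master write. */
void i2c_write_received(midi_parser *p, const uint8_t *buf, size_t len)
{
    if (len < 1 || buf[0] != REG_MIDI_STREAM)
        return;
    for (size_t i = 1; i < len; i++)
        midi_byte(p, buf[i]);
}
```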

I don't have any interest in trying to implement even a sample-based piano synth in hardware, let alone a fancier synth than that. But a simple 32-voice sine wave + some harmonics that responds to note-on velocities, note-offs, and the sustain pedal could maybe be useful? I'm imagining a situation where you've taken a unit to a tech for them to work on the regulation of the system, after confirming that whatever your issue is, it's present even when using this simple synth.
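
And a sketch of the note bookkeeping that implies -- 32 voices, note-on velocity starting a voice, note-off either ending it or deferring to the sustain pedal (CC 64). The names are the same hypothetical hooks as in the sketch above; a real version would also want voice stealing and a proper release stage:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_VOICES 32

typedef struct {
    bool    active;    /* voice is currently sounding */
    bool    held;      /* key released, but kept ringing by the pedal */
    uint8_t note;
    uint8_t velocity;
} voice;

static voice voices[NUM_VOICES];
static bool  sustain_down;

void synth_note_on(uint8_t note, uint8_t velocity)
{
    for (int i = 0; i < NUM_VOICES; i++) {
        if (!voices[i].active) {
            voices[i] = (voice){ .active = true, .note = note,
                                 .velocity = velocity };
            return;
        }
    }
    /* all 32 voices busy: a real implementation would steal one */
}

void synth_note_off(uint8_t note)
{
    for (int i = 0; i < NUM_VOICES; i++) {
        if (voices[i].active && voices[i].note == note && !voices[i].held) {
            if (sustain_down)
                voices[i].held = true;     /* keep ringing until pedal up */
            else
                voices[i].active = false;  /* real code: enter envelope release */
            return;
        }
    }
}

void synth_control_change(uint8_t cc, uint8_t value)
{
    if (cc != 64)                          /* only the sustain pedal here */
        return;
    sustain_down = (value >= 64);
    if (!sustain_down) {
        for (int i = 0; i < NUM_VOICES; i++) {
            if (voices[i].held) {
                voices[i].active = false;
                voices[i].held   = false;
            }
        }
    }
}
```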

It would also serve as a reference for anyone who wanted to build a more advanced synth. For instance, it is possible to get a Raspberry Pi to act as an I2C slave device. So, you could make a board that brings in the Qwiic connection, has plugs for a Raspberry Pi compute module (both for it to mount, and so you can put a display/keyboard/mouse on it), takes the audio output from the module, runs it through amps. Then you could slap Pianoteq on the Pi module, and have a largely standalone system with "real" synthesis and "real" audio.

davidedelvento commented 3 years ago

I think this is a fantastic idea, for even much more than you say. For example, my NU1 does not have any "long lasting" organ-like sound. Every sound it has is a piano-like decaying one. It's a big hassle to do slow harmony exercises in that setup, because the sound disappears earlier than I need it. Even a simple sine wave would be very useful for that (assuming it can handle polyphony; 16 notes suffice for the purpose). A possible problem, which I have with the phone apps I use to work around this, is with the lowest or highest frequencies, which become gradually inaudible, likely because of the phone line-out amplification. The problem might be worse at low frequencies if you go with sine waves without many harmonics (which is a problem on acoustic pianos too). I suspect the setup you describe would have similar problems?

It would be phenomenal (but probably too much?) if you could somehow have a small number of programmable voices, even if just in firmware (rather than in a user interface). For example, each voice could have a number of harmonics AND an attack-decay-sustain-release envelope, and maybe even temperament/tuning, and maybe different harmonics in different registers (low vs high notes). But that is probably too much and could quickly skyrocket into a full-blown synth... which you rightly do not want to spend your time on -- so it's likely best left to the external R-Pi. Unless... the reason I mention it is that I don't know what would be easy/possible on the hardware you have in mind. If the hardware would easily support this kind of thing, but the problem is just firmware/software, I'll be happy to take a look and see what it takes to make it, and perhaps contribute.

Last, why would one want to use the R-Pi as an I2C slave device rather than connecting to this piano controller via regular MIDI-over-USB? To be clear: this is a sincere question, not a rhetorical one: I don't know the answer and/or the advantage of one vs the other solution.

jkominek commented 3 years ago

> I think this is a fantastic idea, for even much more than you say. For example, my NU1 does not have any "long lasting" organ-like sound. Every sound it has is a piano-like decaying one. It's a big hassle to do slow harmony exercises in that setup, because the sound disappears earlier than I need it. Even a simple sine wave would be very useful for that (assuming it can handle polyphony; 16 notes suffice for the purpose).

What, don't you like Organteq? 🤪

> A possible problem, which I have with the phone apps I use to work around this, is with the lowest or highest frequencies, which become gradually inaudible, likely because of the phone line-out amplification. The problem might be worse at low frequencies if you go with sine waves without many harmonics (which is a problem on acoustic pianos too). I suspect the setup you describe would have similar problems?

I've not designed an audio amplifier before, but I feel like you'd simply run into problems with your speakers before the amplifier. Subwoofers and such only go so low. Pianos and organs, being physically large objects, just have an advantage at producing low frequencies.

Adding a line-level output that's a low-passed version of the signal, so that you've got something to route to your subwoofer, is probably the only thing that's within the scope of this project.

> Unless... the reason I mention it is that I don't know what would be easy/possible on the hardware you have in mind. If the hardware would easily support this kind of thing, but the problem is just firmware/software, I'll be happy to take a look and see what it takes to make it, and perhaps contribute.

I don't have any particular hardware in mind at the moment. I could probably squeeze a 16-32 voice synth with ADSR and a few harmonics onto another one of the STM32H743s I'm already using without trying very hard. The code would of course be open like everything else, so once you got a development environment up, you could easily sit there reflashing the part, and immediately hitting keys to hear what it sounds like.
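
For a rough sense of the per-voice math (all numbers illustrative, nothing measured on an H743): a phase accumulator into a small sine table, a few fixed harmonic weights, and a piecewise-linear ADSR evaluated per sample. At 48 kHz, 32 voices × 4 harmonics is on the order of six million table lookups per second, which seems comfortable for a 480 MHz part, though that's a guess rather than a measurement.

```c
#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE   48000.0f
#define TABLE_SIZE    1024
#define NUM_HARMONICS 4

static float sine_table[TABLE_SIZE];
static const float harmonic_gain[NUM_HARMONICS] = { 1.0f, 0.4f, 0.2f, 0.1f };

void tables_init(void)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        sine_table[i] = sinf(6.2831853f * (float)i / TABLE_SIZE);
}

typedef enum { ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE, ENV_IDLE } env_stage;

typedef struct {
    float phase;      /* table index, 0..TABLE_SIZE */
    float phase_inc;  /* note_hz * TABLE_SIZE / SAMPLE_RATE */
    float amp;        /* velocity scaling, 0..1 */
    float env;        /* current envelope level, 0..1 */
    env_stage stage;
} render_voice;

/* One output sample for one voice; the caller sums all active voices. */
float voice_render(render_voice *v)
{
    /* piecewise-linear ADSR with made-up times: 5 ms / 300 ms / 0.6 / 200 ms */
    switch (v->stage) {
    case ENV_ATTACK:
        v->env += 1.0f / (0.005f * SAMPLE_RATE);
        if (v->env >= 1.0f) { v->env = 1.0f; v->stage = ENV_DECAY; }
        break;
    case ENV_DECAY:
        v->env -= 1.0f / (0.300f * SAMPLE_RATE);
        if (v->env <= 0.6f) { v->env = 0.6f; v->stage = ENV_SUSTAIN; }
        break;
    case ENV_SUSTAIN:
        break;
    case ENV_RELEASE:
        v->env -= 1.0f / (0.200f * SAMPLE_RATE);
        if (v->env <= 0.0f) { v->env = 0.0f; v->stage = ENV_IDLE; }
        break;
    case ENV_IDLE:
        return 0.0f;
    }

    /* fundamental plus a few fixed-weight harmonics from one table */
    float sample = 0.0f;
    for (int h = 0; h < NUM_HARMONICS; h++) {
        int idx = (int)(v->phase * (float)(h + 1)) % TABLE_SIZE;
        sample += harmonic_gain[h] * sine_table[idx];
    }

    v->phase += v->phase_inc;
    if (v->phase >= TABLE_SIZE)
        v->phase -= TABLE_SIZE;

    return sample * v->amp * v->env;
}
```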

Using something more DSP-oriented would probably broaden the options available. Before I actually go anywhere with this I'll look around, there have to be some other open source MIDI synths. If any of them look good I'll just lift whatever they're doing and put the interfaces on it that I want. 😃

> Last, why would one want to use the R-Pi as an I2C slave device rather than connecting to this piano controller via regular MIDI-over-USB? To be clear: this is a sincere question, not a rhetorical one: I don't know the answer and/or the advantage of one vs the other solution.

Oh, hah, yes. My plan every other time I thought about this was to use USB, as you describe. I just forgot that USB was even an option when I was coming at it from the standpoint of incrementally improving a toy synth board into something "real".

That said, there might be less latency (🤪) if the Pi was driven by the rest of the system. USB has to poll devices, and the USB stack has a lot more stuff in it... I'm not sure. It might be amusing to look at.

davidedelvento commented 3 years ago

> What, don't you like Organteq? :zany_face:

Actually I love Organteq! In my opinion it is so much closer to a real organ than Pianoteq is to a real piano. My only problem is that I'd use it so little that I can't justify its price. I'd happily have bought it if it had the same price structure as Pianoteq, or if it came at a discount when bought together with Pianoteq... or if, at its current price, it included some hardware preconfigured and optimized to run it on (I know it should work on a R-Pi, but as I said I need it only for some exercises, so I have zero interest in spending lots of money AND lots of time on it).

> Using something more DSP-oriented would probably broaden the options available.

My experience with them is 20y old so I'd better keep my mouth shut in this regard :)

> Before I actually go anywhere with this I'll look around, there have to be some other open source MIDI synths.

The most popular is https://www.fluidsynth.org/, which is what the app I use on my Android phone is based on. Not sure it'd compile for a DSP though... The documentation says:

> FluidSynth runs on Linux, Mac OS X, and the Windows platforms, and support for OS/2 and OpenSolaris is experimental. It has audio and midi drivers for all mentioned platforms but you can use it with your own drivers if your application already handles MIDI and audio input/output. This document explains the basic usage of FluidSynth and provides examples that you can reuse.

It does not mention Android, but here it is, so we may get it to work on other "obscure" hw too.
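
For what it's worth, the FluidSynth C API itself is tiny; on a desktop, something like the sketch below (the soundfont path and timings are placeholders) is the whole front-end, so any porting pain would presumably be in the audio driver and filesystem side rather than in this interface.

```c
/* Minimal FluidSynth usage sketch: settings -> synth -> audio driver ->
 * soundfont, then note on/off. The soundfont path is a placeholder. */
#include <fluidsynth.h>
#include <unistd.h>

int main(void)
{
    fluid_settings_t     *settings = new_fluid_settings();
    fluid_synth_t        *synth    = new_fluid_synth(settings);
    fluid_audio_driver_t *adriver  = new_fluid_audio_driver(settings, synth);

    fluid_synth_sfload(synth, "/usr/share/sounds/sf2/FluidR3_GM.sf2", 1);

    fluid_synth_noteon(synth, 0, 60, 100);   /* middle C, velocity 100 */
    sleep(2);
    fluid_synth_noteoff(synth, 0, 60);

    delete_fluid_audio_driver(adriver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```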

On the other hand, all this synth business can be fun and we might achieve something, but we should be wary of spending lots of time, money and our energies on something which might be better served by an off-the-shelf cheap thingy. Yet the simplest sine-wave idea is certainly worth it.

> I've not designed an audio amplifier before, but I feel like you'd simply run into problems with your speakers before the amplifier.

Well, that may be the case for this project, but in what I do, I use the NU1's internal speakers (which sound just fine across the whole piano range, from the bottom to the top). I feed the Android fluidsynth output into the NU1's line-in via the phone's line-out/headphone jack. I suspect the latter is the problem (but it could be the synth itself or any other place in the pipeline -- certainly not the speakers and not the NU1's final amplifier, though it could be the preamp for the line-in).

> Last, why would one want to use the R-Pi as an I2C slave device rather than connecting to this piano controller via regular MIDI-over-USB?

> there might be less latency :zany_face: if the Pi was driven by the rest of the system. USB has to poll devices, and the USB stack has a lot more stuff in it... I'm not sure. It might be amusing to look at.

Actually that would be a huge plus. Latency on a computer piano is a huge problem for many people, so if I2C is faster than USB that's a boon, especially for a R-Pi, which is only marginally performant for this purpose. As with everything, it depends how much faster for how much work...
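
For a rough sense of scale (assuming 400 kHz Fast-mode I2C and a full-speed USB link, which is a guess at what would actually be used): a three-byte Note On plus the address byte costs roughly

$$
t_{\mathrm{I2C}} \approx \frac{4\ \text{bytes} \times 9\ \text{bits/byte}}{400\ \text{kHz}} \approx 90\ \mu\text{s}
$$

on the bus, while full-speed USB polls on 1 ms frames, so USB can add up to a millisecond before the host's stack is counted. Both are probably small next to the several milliseconds of audio buffering a Pi-based synth needs, so the practical win may be modest.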

jkominek commented 2 years ago

Leaving some relevant-looking links here:

davidedelvento commented 2 years ago

> https://github.com/julbouln/musicboard

If my understanding is correct this provides almost everything; the only missing piece is that the audio out is USB only rather than an analog signal, but that should be easy to add from the DAC. It claims it needs lightweight resources, so... Would you deploy it on a dedicated board as in your initial comment, or do you have enough spare computing power on some other board?

And also, fluidsynth is so feature-rich that it raises the question: what sort of UI should one use to control it? I think that would open the can of worms of buttons/LEDs vs a display vs use-your-smartphone... Do you want to go there? I guess you can answer: no, just use whichever sound was configured the last time you connected it to a computer or smartphone... Phone? Wait, if you have the phone in your pocket (who doesn't), there's even a phone version of fluidsynth which works pretty well; I've used it often.

And that is why I stopped looking at the STM32 port after the first difficulty: you can just hook up the phone to the MIDI-over-USB of the piano-conversion and you have the MIDI, the synth, plenty of room for samples, the DAC, the amplifier, a speaker (ahem), a headphone amplifier and its corresponding jack... plus the possibility to record MIDI and/or audio, and to connect to the internet to send the data somewhere else... Hard to justify the effort to develop solutions (and the money, if a separate board needs to be added to the BOM) just to compete with all of that these days...

If anything, I'd make a MIDI-SysEx-based myTechnician interface to access settings, and a corresponding desktop and mobile app (similar in concept to what I proof-of-concept'ed here) -- but I digress; that should be a separate discussion.

jkominek commented 2 years ago

To be clear, I'm mostly just leaving notes here for completeness' sake. I don't currently intend to do any work on this task. Happy to discuss it, and provide guidance if somebody comes along and thinks it sounds like fun. But that'll be it.

> If my understanding is correct this provides almost everything; the only missing piece is that the audio out is USB only rather than an analog signal, but that should be easy to add from the DAC.

Looks like the musicboard project (the more recent of the materials) has analog output through an external DAC (the "CODEC"), which is what I'd do, because it's what I've seen in all the schematics for professional products. :) Also it is really easy.

> It claims it needs lightweight resources, so... Would you deploy it on a dedicated board as in your initial comment, or do you have enough spare computing power on some other board?

However light it is, I don't intend to add any audio processing to the main board, which is where it would have to reside. (I'm actively stripping everything I possibly can off of the main board, and feeling pleased with myself for my success.) I expect to have it broadcast MIDI over one or more I2C buses, to be picked up by whatever else. Could be a hypothetical synth board; far more likely to be the MIDI DIN board or ethernet/bluetooth adapter.

> And also, fluidsynth is so feature-rich that it raises the question: what sort of UI should one use to control it?

To whoever implements this: I expect to have non-MIDI USB endpoints available for talking to the main board from a PC, over which you could command it to send arbitrary stuff via I2C.

> Do you want to go there?

I'm not going anywhere near this. :)

> you can just hook up the phone to the MIDI-over-USB of the piano-conversion and you have the MIDI, the synth, plenty of room for samples, the DAC, the amplifier, a speaker (ahem), a headphone amplifier and its corresponding jack...

Yup, and I've got a USB-to-Lightning cable around some place and Ravenscroft on my phone so that I can test that this works.