GuitarML / NeuralPi

Raspberry Pi guitar pedal using neural networks to emulate real amps and effects.
https://guitarml.com/
GNU General Public License v3.0

Use model from SmartAmpPro #8

Closed by R0Wi 3 years ago

R0Wi commented 3 years ago

Hi, and first of all, thanks for this really interesting project! I was wondering which model formats are currently supported by NeuralPi. If I understood correctly, the black-box modeling of an amp or distortion pedal currently has to be done with https://github.com/Alec-Wright/Automated-GuitarAmpModelling. To me, the workflow behind https://github.com/GuitarML/SmartAmpPro looks a lot easier, so I asked myself: would it be possible to train a model on my desktop machine using SmartAmpPro and then transfer it to my Pi using the scripts provided in this repo?

Thanks for your feedback 😎

mishushakov commented 3 years ago

Thanks for the feedback, glad you're taking part!

SmartAmpPro uses a different machine learning architecture, so the models are incompatible; see this Medium post for an explanation: https://keyth72.medium.com/guitarml-faq-6b18abc1116c

I'd suggest sticking with the latest architecture (Automated-GuitarAmpModelling), because it's the most performant and portable.

Those models can be run on desktop with the Chameleon plugin.

As for training, we have a more user-friendly fork available: https://github.com/GuitarML/Automated-GuitarAmpModelling
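For context, these models are recurrent (LSTM-based) networks that process audio one sample at a time, which is also how NeuralPi runs them in real time. A rough, self-contained sketch of that inference loop in plain NumPy — the layer sizes and random weights here are illustrative assumptions, not the actual exported model format:

```python
import numpy as np

class TinyLSTMAmp:
    """Minimal single-layer LSTM + dense output, run sample-by-sample.
    A real trained model would load its weights from the exported file
    instead of using random ones."""

    def __init__(self, hidden=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.hidden = hidden
        # Input-to-hidden and hidden-to-hidden weights for the 4 LSTM gates
        self.Wx = rng.normal(0, 0.5, (4 * hidden, 1))
        self.Wh = rng.normal(0, 0.5, (4 * hidden, hidden))
        self.b = np.zeros(4 * hidden)
        # Dense output layer: hidden state -> one audio sample
        self.Wo = rng.normal(0, 0.5, (1, hidden))
        self.h = np.zeros(hidden)  # hidden state, kept between samples
        self.c = np.zeros(hidden)  # cell state, kept between samples

    def process_sample(self, x):
        z = self.Wx @ np.array([x]) + self.Wh @ self.h + self.b
        i, f, g, o = np.split(z, 4)  # input, forget, cell, output gates
        sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
        self.c = sigmoid(f) * self.c + sigmoid(i) * np.tanh(g)
        self.h = sigmoid(o) * np.tanh(self.c)
        return float(self.Wo @ self.h)

    def process_block(self, block):
        return np.array([self.process_sample(s) for s in block])

amp = TinyLSTMAmp()
out = amp.process_block(np.sin(np.linspace(0, 20, 256)))  # dummy guitar signal
```

The state carried across `process_sample` calls is what lets a small network capture an amp's time-dependent behaviour, and it's also why inference has to run sequentially rather than in parallel over the buffer.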

If you'd like to stick with SmartAmpPro, you could probably cross-compile it for the Raspberry Pi and load it on Elk OS instead of the built-in NeuralPi software (NeuralPi.vst).

Hopefully this answers your questions.

R0Wi commented 3 years ago

Hi @mishushakov, thanks for your detailed insights!

My personal goal would be to use my Pi as a digital FX device capable of black-box modelling external analog devices (like amps and distortion pedals), and then to use those models live instead of the "real" devices. I became aware of this project because I was basically searching for a DIY alternative to the relatively new Neural DSP Quad Cortex (https://neuraldsp.com/quad-cortex). For that proprietary box, they say they use machine learning algorithms to model external gear.

They have two quite simple steps:

  1. Modelling: plug your FX device or amp into the box and run a modelling phase. I think they simply send a special test signal through the circuit and capture the result. After that they build the "model" and save it in the box.
  2. Use the model: after the modelling phase has completed, you have a new model in your collection which can be used live instead of the "real" gear.
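The capture step above can be sketched in a few lines: play a known test signal through the device, record what comes back, and use that input/output pair as training data for a black-box model. In the sketch below, a tanh waveshaper stands in for the real analog device, and everything stays in memory rather than going through an audio interface — both are simplifying assumptions:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def make_test_signal(seconds=1.0):
    """A simple exponential sine sweep (20 Hz -> 1 kHz), a common probe signal."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    freq = 20.0 * (1000.0 / 20.0) ** (t / seconds)     # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / SR           # integrate to get phase
    return 0.5 * np.sin(phase)

def fake_analog_device(x, drive=5.0):
    """Stand-in for the real amp/pedal: a soft-clipping waveshaper."""
    return np.tanh(drive * x)

dry = make_test_signal()
wet = fake_analog_device(dry)  # in reality: play `dry` out, record `wet` back
# (dry, wet) is the paired dataset a black-box trainer would fit a model to
```

In a real capture, the only part that changes is the middle step: `dry` would be played through an audio interface into the device and `wet` recorded back, ideally with the two streams time-aligned before training.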

Like you already proposed in https://github.com/GuitarML/NeuralPi/issues/4#issuecomment-880191097 this could be even more interesting when combined with MODEP.

So, to summarize: I think it would be really great to have both modelling and usage on the Raspberry Pi (maybe remotely controlled by a tablet or smartphone). This would make the RPi a standalone box for guitar players who just want to use their gear digitally. Might be worth opening a new issue or discussion for that?

I know it's a hard task to compete with such professional gear using a Raspberry Pi, but nevertheless it would be interesting to see how far we can get :smile:

Closing this for now since the question was answered, thanks again :+1:

mishushakov commented 3 years ago

Cool. You can already control NeuralPi with an OSC client, for example TouchOSC; just connect it to the IP:port shown in the plugin.
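An OSC message is simple enough to build with the Python standard library alone, so a remote-control script doesn't strictly need a dedicated OSC app. A minimal sketch — note that the parameter address and port in the usage comment are hypothetical placeholders; check the plugin for the actual IP:port and address space it exposes:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string,
    padded ',f' type-tag string, then a big-endian float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

def send_param(host: str, port: int, address: str, value: float) -> None:
    """Fire a single OSC parameter change over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(osc_message(address, value), (host, port))

# Hypothetical usage -- substitute the real IP:port and parameter address:
# send_param("192.168.1.50", 24024, "/parameter/neuralpi/gain", 0.7)
```

Since OSC over UDP is connectionless, this also works fine from a phone or tablet running any scripting environment on the same network.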

I'm not aware of anyone who has attempted to train models on the Pi, so it might be worth a try.

If it works, we could publish an apt package that includes (or references) all the required training dependencies plus the .vst (or .lv2, for that matter), so that training could be done from within the plugin.