Guitar plugin made with JUCE that uses neural network models to emulate real-world hardware. This plugin uses an LSTM model to recreate the sound of real amps and pedals. You can record samples and train models from the plugin. Tone models are saved in .json format. Model training is accomplished using TensorFlow/Keras. The main improvement over the original SmartAmp is that training takes less than five minutes on CPU (vs. 8 hours on GPU) for comparable sound quality. Training has also been integrated into the plugin. For best sound, use with additional Reverb/IR.
An alternate way to train models for SmartAmpPro is through Colab. Upload the "train_colab.ipynb" or "train_colab_mse.ipynb" script and .wav file(s) to Colab, then follow the instructions in the notes. No installs necessary, only a Google account to train models in the cloud.
Hear the audio demo on YouTube!
The Python dependencies for training tones are listed in the requirements.txt file. It is recommended to use the "pip" package manager to install these dependencies. Pip is included in Python 3.4 and up. Python 3.6 and Python 3.8 were used during development, and either version is recommended for SmartAmpPro. TensorFlow 2.4 does not support Python 3.9 as of 1/22/2021.
#In the command terminal:
pip install -r requirements.txt
# OR run the included "install_requirements" script (.bat for Windows, .sh for Mac/Linux)
Note: You can still use the plugin without installing the python dependencies, but the "Train Tone" button will not work.
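If you are unsure whether the dependencies installed correctly, a quick check like the following can help before clicking "Train Tone". This is a hypothetical helper, not part of SmartAmpPro, and the module list is assumed from a typical Keras training setup; the actual requirements.txt may differ.

```python
# Hypothetical helper (not part of SmartAmpPro): check whether the
# training dependencies appear to be installed before using "Train Tone".
import importlib.util

# Module names assumed from a typical Keras training setup; the
# project's actual requirements.txt may list different packages.
REQUIRED_MODULES = ["tensorflow", "numpy", "scipy", "matplotlib"]

def missing_dependencies(modules=REQUIRED_MODULES):
    """Return the subset of modules that cannot be found on this system."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_dependencies()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All training dependencies found.")
```

Using `find_spec` avoids importing the heavy packages just to test for their presence.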
All .json models, Python training files, and .wav samples created from the plugin are saved to the "userApplicationDataDirectory" defined by JUCE for each OS. There is no automatic data cleanup of this directory. If you modify the "train.py" file and want to revert to the original, remove the .py file and it will be re-installed the next time you open the plugin.
userApplicationDataDirectory Locations (default location for models, python scripts, and .wav samples for training):
Windows 10: C:/Users/<username>/AppData/Roaming/GuitarML/SmartAmpPro
OSX (default): /Users/<username>/Library/GuitarML/SmartAmpPro
OSX (for Garageband, substitute appropriate version):
/Users/<username>/Library/Containers/com.apple.garageband10/Data/Library/GuitarML/SmartAmpPro
Linux: /home/<username>/.config/GuitarML/SmartAmpPro
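The per-OS paths above can be approximated in a few lines of Python, which is handy for scripts that need to find your tone files. This is a sketch, not code from the plugin; JUCE resolves the directory internally in C++.

```python
# Sketch (not from the plugin source): approximate the JUCE
# userApplicationDataDirectory paths listed above, per platform.
# Note: the Garageband sandboxed location on macOS is not handled here.
import os
import sys
from pathlib import Path

def smart_amp_pro_dir():
    """Return the default SmartAmpPro data directory for this OS."""
    home = Path.home()
    if sys.platform.startswith("win"):
        base = Path(os.environ.get("APPDATA", home / "AppData" / "Roaming"))
    elif sys.platform == "darwin":
        base = home / "Library"
    else:  # Linux and other Unix
        base = home / ".config"
    return base / "GuitarML" / "SmartAmpPro"

if __name__ == "__main__":
    print(smart_amp_pro_dir())
```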
#The following directories are created in the SmartAmpPro folder:
/captures # Where all recorded .wav files are saved
/install # Where Python dependency installation scripts are stored
/models # The model output of the training scripts (Keras .h5 model, generated .wav samples, generated plots)
/tones # Where all .json tone files are saved
/training # Where python training scripts are stored
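A script that works with these folders can recreate the layout defensively, since there is no guarantee all five exist before the plugin has run. A minimal sketch, using a temporary directory purely for illustration:

```python
# Sketch: recreate the SmartAmpPro folder layout described above inside
# an arbitrary base directory (a temporary one here, for illustration).
import tempfile
from pathlib import Path

SUBFOLDERS = ["captures", "install", "models", "tones", "training"]

def ensure_layout(base):
    """Create the five SmartAmpPro subfolders if they do not exist."""
    base = Path(base)
    for name in SUBFOLDERS:
        (base / name).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in base.iterdir() if p.is_dir())

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        print(ensure_layout(tmp))
```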
Export or import tone files (with .json extension) to and from the plugin. Click the "Add Tone" button to open up a file select dialog box. Select your .json tone file (or multiple files) to import them into the plugin. This simply copies the tone file to your SmartAmpPro directory defined above. The tone is now an option in the drop down box in the plugin. The "Export Tone" button copies the tone files from the SmartAmpPro directory to the chosen location.
Note: The original SmartAmp/PedalNetRT .json files are not compatible with this plugin because it is a different machine learning model. Only use tone files trained from SmartAmpPro.
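The import/export behavior described above is essentially a file copy. The following sketch mirrors it for scripted workflows; the function names and paths are illustrative, not taken from the plugin code.

```python
# Sketch of what "Add Tone" does, per the description above: copy a
# .json tone file into the SmartAmpPro "tones" folder so it appears in
# the drop-down. Names and paths are illustrative, not the plugin's.
import shutil
from pathlib import Path

def add_tone(json_path, tones_dir):
    """Copy a .json tone file into tones_dir and return the new path."""
    json_path = Path(json_path)
    if json_path.suffix != ".json":
        raise ValueError("Tone files must have a .json extension")
    tones_dir = Path(tones_dir)
    tones_dir.mkdir(parents=True, exist_ok=True)
    dest = tones_dir / json_path.name
    shutil.copy(json_path, dest)
    return dest

def available_tones(tones_dir):
    """Names shown in the drop-down: every .json file in tones_dir."""
    return sorted(p.stem for p in Path(tones_dir).glob("*.json"))
```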
The Python dependencies from the requirements.txt file must be installed on your system (not in a virtual environment) for tone training to work.
On Windows, a terminal window should pop up and execute the training script. On Mac, the terminal is suppressed and training runs in the background. The percent-complete status should update in the plugin. If it remains at 0 or seems to get stuck, stop the training by clicking the button.
Note: To troubleshoot or run training manually, navigate to the SmartAmpPro directory, open a cmd prompt and run train.py with the appropriate command.
# For a stereo (two channel) wav file recorded from SmartAmpPro "Start Capture" button:
python train.py stereo.wav model_name
# For two mono wav files:
python train.py input.wav model_name --out_file=output.wav
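The two invocations above differ only in how the input/output pair is stored. If you have a stereo capture but want the two-mono-file form, a split like the following works. One assumption to flag: this sketch treats the left channel as the dry input and the right channel as the amp output, which may not match the plugin's actual channel order.

```python
# Sketch under an assumption: the stereo capture from "Start Capture"
# holds the dry guitar on the left channel and the processed output on
# the right (the actual channel order may differ). Splits a 16-bit
# stereo wav into the two mono files the second train.py form expects.
import array
import wave

def split_stereo(stereo_path, in_path, out_path):
    """Write the left/right channels of a 16-bit stereo wav to two mono files."""
    with wave.open(stereo_path, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        framerate = w.getframerate()
        frames = array.array("h", w.readframes(w.getnframes()))
    left, right = frames[0::2], frames[1::2]  # de-interleave samples
    for path, chan in ((in_path, left), (out_path, right)):
        with wave.open(path, "wb") as m:
            m.setnchannels(1)
            m.setsampwidth(2)
            m.setframerate(framerate)
            m.writeframes(chan.tobytes())
```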
Note: Currently clicking "Stop Training" stops the internal plugin processes for training, but the separate training application will keep running until complete.
Note: The guitar effect will keep running during training, but due to the extra CPU usage it may be better to switch the effect off until training is finished.
Note: You can modify the train.py script to test different parameters, but this may produce undesired results in the plugin. It is recommended to modify only the number of epochs, the learning rate, or the number of hidden units of the LSTM layer.
See the GuitarLSTM repo for more information on how training works, and tips for creating sample recordings.
<full-path-to>/json-develop/include
<full-path-to>/NumCpp-master/include
<full-path-to>/boost_1_75_0/boost_1_75_0
Dev Note: The above dependencies were chosen to facilitate rapid prototyping. It is possible to accomplish the same thing using only JUCE and the standard C++ library.
Note: Make sure to build in Release mode unless actually debugging. Debug mode will not keep up with real time playing.
This plugin is designed to showcase the speed at which the guitar models can be trained. Using larger models (more parameters) can improve accuracy at the cost of longer training time. The C++ inference could also be rewritten to be more efficient and handle larger models at real-time speeds. Future work will focus on improving the C++ inference code to handle larger models, and on optimizing the LSTM model to handle more complex signals (high gain/distortion).
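To see why larger models cost more at inference time, consider one LSTM step per audio sample. The NumPy sketch below is an illustration only, not the plugin's C++ code, and the hidden size used in the test is a made-up figure rather than the plugin's actual value.

```python
# Illustration only: a single LSTM cell step in NumPy, to show why
# per-sample inference cost grows with hidden size. The plugin runs an
# equivalent computation in C++ once per audio sample.
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: scalar input x, state vectors h/c of size n.

    W has shape (4n,), U has shape (4n, n), b has shape (4n,):
    the four gates' weights stacked in i, f, g, o order.
    """
    n = h.shape[0]
    z = W * x + U @ h + b              # stacked gate pre-activations
    i = 1 / (1 + np.exp(-z[:n]))       # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))    # forget gate
    g = np.tanh(z[2*n:3*n])            # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:]))     # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The `U @ h` product is O(n^2) per sample, so doubling the hidden size roughly quadruples the work done 44,100 times per second, which is why efficient C++ inference matters for larger models.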
The current model training takes a "snapshot" of a particular amp/pedal/rig. It is possible to use multiple recordings at different knob settings and train a single model that interpolates between them, for example a model with an adjustable gain parameter that more accurately simulates the behavior of the system. This will also be a focus of future work.
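One common way to build such a conditioned model is to feed the knob setting as an extra input feature alongside each audio sample, so a single network learns the whole control range. The sketch below shows only the input-shaping step; the function name and shapes are illustrative, not taken from SmartAmpPro.

```python
# Sketch of the conditioned-model idea above: pair each audio sample
# with a constant knob value so one network can interpolate between
# gain settings. Shapes and names are illustrative, not the plugin's.
import numpy as np

def condition_input(audio, gain):
    """Stack a constant gain value (0..1) next to each audio sample."""
    audio = np.asarray(audio, dtype=np.float32)
    knob = np.full_like(audio, gain)
    return np.stack([audio, knob], axis=-1)  # shape (num_samples, 2)
```

Training data would then combine several captures, each conditioned with the gain value it was recorded at.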
Other possibilities: