It seems I can control the channel fader in Reaper, but it's extremely laggy, using about 35% CPU on a Core i5.
I see references to 44.1, but it seems to behave better at 48k? And not at all at other rates? Also, on my MOTU 24 i/o it seems to only work with a buffer size of 128; smaller or larger buffers create problems?
Do the default Reaper project settings for .wav files affect the plugin? Should the bit depth be changed from 64-bit to one of the other options?
Good questions. Yes, for now the audio should be 44.1; I haven’t tried other rates, but it’s interesting that 48 sounds as good or better. Training data is done in FP32, but I don’t think that would affect running the plugin. Try reducing the bit depth setting in Reaper; it could be that the high bit depth combined with the plugin is just too slow. I’ve been able to get it working with different buffer sizes on my Focusrite 2i2, so I’m not sure why only 128 works for you.
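If you want to rule out a sample-rate mismatch offline, something like this rough Python sketch would resample a clip to 44.1k before running it through the model (this is not part of the plugin, and the file names are just placeholders):

```python
# Rough offline sketch (not part of the plugin): resample a clip to the
# 44.1 kHz rate the model was trained at, to rule out rate mismatch.
# File names are placeholders.
from fractions import Fraction

import soundfile as sf
from scipy.signal import resample_poly

audio, sr = sf.read("test_clip.wav")
if sr != 44100:
    ratio = Fraction(44100, sr)  # e.g. 48000 -> 147/160
    audio = resample_poly(audio, up=ratio.numerator, down=ratio.denominator)
sf.write("test_clip_44k.wav", audio, 44100)
```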
I’d like to find ways to speed up the algorithm so that it’s comparable to commercial plugins. One way to do that is by using a different machine learning approach besides WaveNet, like an LSTM model, but that would be a complete overhaul of the training and inference code. Another way to speed it up is by using a smaller WaveNet model, but at the cost of accuracy, so a better approach may be to improve the training quality of the WaveNet model.
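For reference, the LSTM route would look something like this minimal PyTorch sketch (illustrative only, not the current training code): a single recurrent layer plus a linear head that predicts one output sample per timestep.

```python
# Minimal illustrative sketch of the LSTM alternative (not the current
# training code): one recurrent layer plus a linear head that maps the
# hidden state to a single output sample per timestep.
import torch
import torch.nn as nn

class AmpLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, samples, 1) mono audio; passing state back in lets
        # inference stream block by block in real time.
        out, state = self.lstm(x, state)
        return self.head(out), state
```

Keeping the recurrent state between blocks is what would make this usable in a real-time plugin, since each audio buffer continues where the last one left off.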
So it seems the plugin prefers 44.1. But it still "locks out" the Reaper GUI if SmartAmp's GUI is on screen?
I know I've got many plugins that use JUCE with no problems?
VST3 seems to be a sketchy thing with some plugins; the overhead for the extra routing possibilities has to be minimized/optimized somehow?
It's interesting how it behaves differently than convolution and circuit-modeling sims. It seems to have a more realistic "presentation", but with some curious quirks. Maybe it needs a low-pass filter going in to prevent some sort of alias-foldback high-frequency hashing?
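Something like this scipy sketch is the kind of pre-model low-pass I mean (the cutoff and filter order here are just guesses, not tuned values):

```python
# Sketch of a pre-model low-pass; the 10 kHz cutoff and 4th-order
# Butterworth are assumptions for illustration, not tuned values.
from scipy.signal import butter, lfilter

def pre_lowpass(audio, sr=44100, cutoff=10000.0, order=4):
    b, a = butter(order, cutoff / (sr / 2), btype="low")
    return lfilter(b, a, audio)
```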
With the "gain" turned up it's interesting that as a signal fades, there is a "secondary" sound that is less bright that becomes apparent, that sounds much more gained-out/distorted. An artifact from the training samples, or a byproduct of pytorch?
All of the "controls" are post-sim, right? Unity gain would be with the gain controls all the way up, and the tone controls or "?" (I'd prefer to choose my own tone frequencies/bandwidth etc.).
What were your amp settings when you made the data set? I notice what seem to be two peaks in the high end, maybe at 2k and at what I presume is the tone-stack crossover point on the Blues Jr. (1.5k?).
It's interesting that it passes pitched harmonics seemingly out to 8k, which probably has a lot to do with my perception of "realism" from it (a lot of sims seem to make more noise as things go up to 3-4k and beyond).
Super interesting; again, thanks for bothering to do this. I've been waiting to try something like this since reading that paper...
Yes I also noticed the secondary sound on the clean channel, I’d like to understand why that happens too. I haven’t noticed that on any models trained from direct signal pedal recordings, just on the clean one from a mic’d amp.
Amp EQ settings were all centered, max gain for the lead channel and 3 (out of 12) gain for the clean channel. On the sim, I apply EQ and gain pre model, and master level post model.
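In pseudocode terms, the ordering is roughly this (the function names are just stand-ins for the plugin's actual stages):

```python
# Toy sketch of the signal chain ordering; eq() and amp_model() are
# stand-ins for the plugin's actual EQ stage and trained WaveNet model.
def process_block(x, eq, gain, amp_model, master_level):
    x = eq(x)                # pre-model tone controls
    x = gain * x             # pre-model input gain
    x = amp_model(x)         # neural amp model
    return master_level * x  # post-model master volume
```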
There’s definitely a lot more to figure out when it comes to improving the model training and implementation, so I’m looking forward to hearing results from other people testing and tweaking it.
When using Reaper on Windows 10 (unconfirmed with other DAWs), the GUI locks and won’t switch to other plugins in the FX view. This is not a problem in Reaper on Linux (Ubuntu).

Fixed in version 1.3 by handling graphics properly.