Closed chipmcdonald closed 3 years ago
Try changing line 14 in train.py from: max_epochs=args.max_epochs, gpus=args.gpus, row_log_interval=100 to: max_epochs=args.max_epochs, row_log_interval=100. It works for me on a free Colab account when no GPU/TPU is available.
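The idea behind the edit can be sketched as a small helper that only passes `gpus` to the Trainer when one is actually requested (the helper name is illustrative; the real train.py builds the Trainer call directly):

```python
# Sketch of the suggested change: omit the `gpus` keyword entirely so
# pytorch_lightning falls back to CPU. `trainer_kwargs` is a hypothetical
# helper, not part of PedalNetRT.
def trainer_kwargs(max_epochs, gpus=None, row_log_interval=100):
    kwargs = {"max_epochs": max_epochs, "row_log_interval": row_log_interval}
    if gpus:  # only request GPUs when explicitly given
        kwargs["gpus"] = gpus
    return kwargs

# With gpus omitted, the Trainer is never asked to claim a GPU:
cpu_args = trainer_kwargs(max_epochs=1500)
```

These kwargs would then be splatted into the Trainer, e.g. `pl.Trainer(**cpu_args)`.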
Yes that should work, but I’ll also add a cpu flag to make it selectable from the command line.
That seems to be working.
I'm getting:

d.py:37: UserWarning: Unsupported ReduceOp for distributed computing.
  warnings.warn(*args, **kwargs)
But it appears some epochs' progress bars reach 100% while others don't move at all - I'm presently presuming that is to be expected?
The warnings above seem to be innocuous?
Thanks!
Thanks, that got me as far as producing a .json file, but despite putting it in my Reaper root directory, my plugins directory, and the vst3 directory SmartPedal resides in, SmartPedal finds no models ("no choices")....?
That seems to be a common problem people are having, try removing and adding back the plugin in the FX window and check if they show up. If not, I'll need more details to try to reproduce the error (Reaper install location and version, operating system (assuming Windows 10?) and anything else you might think is relevant). Thanks!
FYI - a "cpu" flag has been added to PedalNetRT, enter "--cpu=1" with train.py to use.
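One way such a flag could be wired up is sketched below; this is an assumption about the mechanism, not necessarily how PedalNetRT actually implements it:

```python
import argparse

# Hypothetical sketch of a --cpu flag gating GPU use; PedalNetRT's real
# argument handling may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--cpu", type=int, default=0)
parser.add_argument("--gpus", type=int, default=1)

args = parser.parse_args(["--cpu=1"])
gpus = None if args.cpu else args.gpus  # None means train on CPU
```

With `gpus` resolved to None, the Trainer would be constructed without claiming any GPU.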
Thanks. Default Reaper install path, Win10 x64. I've tried adding/removing SmartPedal from a track to no avail. I also tried moving SmartPedal.vst3 to a different directory and put my .json in there with it - Reaper finds the plugin. It must be something with my .json file...
I added a workaround that will hopefully work for you: as of release 1.2 there's a load model button that lets you select the json file from a file dialog. If that doesn't work, try the default ts9 model in the repo; if the default works, then it's probably your json file, which I can take a look at if you'd like.
CPU flag added as an easy way to force CPU training in PedalNetRT; closing issue.
Installed the CUDA-less PyTorch,
... I'm getting:

File "C:\Users\Chip\anaconda3\lib\site-packages\pytorch_lightning\trainer\distrib_parts.py", line 317, in sanitize_gpu_ids
    raise MisconfigurationException(f"""
pytorch_lightning.utilities.exceptions.MisconfigurationException:
You requested GPUs: [0]
But your machine only has: []
I thought I would try it with the "no CUDA" PyTorch Lightning, but I guess I was too optimistic?
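The exception above happens because the script still asks the Trainer for GPU 0 on a machine that has none; installing a CUDA-less PyTorch doesn't change what the script requests. A defensive guard can be sketched like this (`resolve_gpus` is a hypothetical helper; real code would feed it `torch.cuda.is_available()`):

```python
# Sketch of a guard that avoids pytorch_lightning's
# MisconfigurationException: only request GPUs that can exist.
def resolve_gpus(requested, cuda_available):
    """Return the value to pass as Trainer(gpus=...); None forces CPU."""
    if requested and cuda_available:
        return requested
    return None  # no CUDA device (or none requested): fall back to CPU
```

This is exactly what the --cpu flag accomplishes: it short-circuits the GPU request so the Trainer never asks for a device the machine doesn't have.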