mishushakov closed this pull request 3 years ago
dear Keith, see there is now a green mark on pull requests saying "All checks have passed": that's the GitHub Action checking that the committer has formatted the code properly (with black)
makes me feel like i'm a good boy 🐶
@GuitarML hey, and thanks for taking your time
i think there is some sort of cognitive dissonance going on: pedalnet is solving a different problem than we are
thanks
also, could you explain to me what this construct is supposed to do? https://github.com/GuitarML/PedalNetRT/blob/c62e58080546534118ed244f93be1e7ad3234e63/model.py#L109-L126
this was not in pedalnet https://github.com/teddykoker/pedalnet/blob/6f72adb803b3d62cf51daa7a38df5e6c72bb29dd/model.py#L76-L85
i removed that and don't see any difference
Oh right, I see what you mean on the blank directories, the readme's can stay, I just didn't understand the purpose at first. Good catch on the try/except, I think that can be removed. It was a workaround, but I later figured out it was related to using an older version of PyTorch Lightning, where the model hparams weren't loading properly when using predict or convert. If you can run all the scripts properly without that exception, feel free to remove it. (Make sure that's not the error you're getting when running the convert script.)
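For context, the kind of hparams workaround being discussed might have looked roughly like this. This is a guess at the shape of the code, not the actual PedalNetRT implementation; the helper name is an assumption:

```python
def get_hparam(hparams, name):
    """Read a hyperparameter whether hparams is an object or a dict."""
    try:
        return getattr(hparams, name)      # attribute-style (Namespace-like object)
    except (TypeError, AttributeError):
        return hparams[name]               # dict-style (older checkpoint restores)

print(get_hparam({"num_channels": 4}, "num_channels"))  # 4
```

If newer PyTorch Lightning always restores hparams consistently, the fallback branch never fires, which is why removing the try/except makes no visible difference.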
in this commit (`model.py`, `plot.py`): in the plot function, changed the `--show` parameter behaviour: if `--show` is present it will show the plots, otherwise not

one thing i'd like to know is why you would want to be able to change hparams when resuming training?
Good question, mainly for extending the max_epochs, so if it doesn’t get to a good loss then you can train longer and see if it improves.
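A toy illustration of that resume-and-extend pattern (a stand-in loop, not actual PyTorch Lightning code; epoch counts are just examples):

```python
# Training stops at max_epochs; resuming with a larger max_epochs simply
# continues from the saved epoch instead of starting over.
def train(start_epoch, max_epochs):
    epochs_run = []
    for epoch in range(start_epoch, max_epochs):
        epochs_run.append(epoch)          # stand-in for one training epoch
    return epochs_run

first = train(0, 1500)                    # original run: epochs 0..1499
resumed = train(len(first), 2500)         # resume with extended max_epochs
print(resumed[0], resumed[-1])            # 1500 2499
```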
alright, so far so good!
changes in this commit:
- training defaults to all GPUs, unless the `--cpu` param is given
- training resumes if the `--resume` param is present; can be combined with `--model` to pick the model, example: `python train.py --model=ts9 --resume`
- notebooks are now stored in the `/notebooks` folder
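The flags above could be wired up roughly like this (defaults and help strings are assumptions, not the committed code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--cpu", action="store_true",
                    help="train on the CPU instead of all GPUs")
parser.add_argument("--resume", action="store_true",
                    help="resume training from an existing checkpoint")
parser.add_argument("--model", default="pedalnet",
                    help="which model to train or resume")

# e.g. python train.py --model=ts9 --resume
args = parser.parse_args(["--model=ts9", "--resume"])
print(args.model, args.resume, args.cpu)  # ts9 True False
```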
Oops I didn’t mean to click ready for review, when you are ready for me to review code let me know, I’m fairly new to collaborating on GitHub. The Colab training looks good, I think the same thing is happening as it did with me for TPUs, where the loss doesn’t go down as far as training locally. It’s still good, but for the default data I got it to go down to .0091 or so for 1500 epochs. Unsure why that is, but might be worth it for the speed and accessibility of using Colab. Thanks!
For some unknown reason the .wav produced by `predict.py` does not make any sound

i have synced the changes. `predict.py` works, but i can't hear any sound in the produced audio

`convert.py` changed to `export.py`
Hey Keith, i had some time to work on the project again!
i changed the file structure to a much simpler one: now each model has its own directory and all the files are saved there instead of in different directories. this is particularly useful with Docker, since instead of binding each directory we'll only need two: one for input data and one for the artefacts
here's how it looks
looks a little lazy, but it does the job. the structure is also way easier to navigate if you have many models
in `export.py` i've added a `--format` argument to specify export formats. it has no use currently, but it will once we add more formats
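One way such a flag could stay extensible is a small registry that maps format names to export functions. This is a sketch under assumptions: the registry, its names, and the exporter callable are all illustrative, not the project's actual code:

```python
import argparse

# Only one format exists today; adding another would be one more entry here.
EXPORTERS = {"default": lambda model: f"exported {model}"}

parser = argparse.ArgumentParser()
parser.add_argument("--format", choices=sorted(EXPORTERS), default="default")
args = parser.parse_args([])

print(EXPORTERS[args.format]("pedalnet"))  # exported pedalnet
```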
also i noticed that i cannot load your ts9 model, because it fails to unpickle. i don't think it's very critical, as all the models i've trained so far load successfully
now i'm going to train some and let you know if we're ready to go
Sounds good! I'm a fan of the reorganization, nice work.
Thanks for the feedback!
here's the colab i'm using https://colab.research.google.com/drive/1_82N7eJYPkYxtJvUDWUo_5rpE_-GXPX1
here are the results
still, i don't know what's wrong with `predict.py`: it gives no error, but i cannot hear any sound
Hey guys! I see you're having trouble with predict; the bug is due to how the wav file is currently being saved: https://github.com/GuitarML/PedalNetRT/blob/71ceec7ef84fc8c2db16a9cad658b54f6be90b7d/predict.py#L12
It's using 16bit instead of 32bit. Changing that fixed it for me, hope it helps you out!
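To see why the 16-bit save comes out silent, here's a minimal stand-in in pure Python (no scipy; the test tone and the int16 full-scale constant 32767 are illustrative):

```python
import math

# The model's predictions are floats in roughly [-1.0, 1.0]. Casting them
# to 16-bit integers directly truncates every sample to 0 or +/-1, which is
# effectively silence, since int16 full scale is +/-32767.
audio = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]

silent = [int(s) for s in audio]            # direct int16-style cast
print(max(abs(s) for s in silent))          # 0 or 1: inaudible

# Fix: keep 32-bit float samples, or rescale before casting to int16.
audible = [int(s * 32767) for s in audio]
print(max(abs(s) for s in audible))         # close to 32767
```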
@ivanpondal good catch, thanks
in this update
will look into #16, thanks
@ivanpondal and @mishushakov, thanks for the catches and fixes! Keep up the good work! This pull will be a big update and I'm looking forward to implementing it.
Prepare function is now included with `train.py`, like in GuitarLSTM: you can train models with `python train.py in.wav out.wav`. `prepare.py` can still be run manually (for people watching your videos)
i changed how models are loaded (again): now instead of specifying a name you should specify the full path, and `data.pickle` will be resolved automatically depending on the directory
example: `python test.py --model=models/pedalnet/pedalnet.ckpt`
this is also to make it compatible with the current documentation
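The resolution rule described above could be sketched like this (the helper name is an assumption; the behaviour matches the description: look for `data.pickle` next to whatever checkpoint `--model` points at):

```python
import os

def resolve_data_file(model_path: str) -> str:
    """models/pedalnet/pedalnet.ckpt -> <same directory>/data.pickle"""
    return os.path.join(os.path.dirname(model_path), "data.pickle")

print(resolve_data_file("models/pedalnet/pedalnet.ckpt"))
```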
one thing i noticed though is that it started to show this `num_channels` error again
not sure why it's back, but i'll reinstall pytorch in colab to see if it changes something
so when i run `train.py`, the model gets a `Namespace`, but when i run everything else, the model gets a dict
now i need to use my brain muscle to figure out why it does this
ok, this one is fixed. will make sure the original pedalnet does not suffer too. what a day!
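A small sketch of one way to smooth over that mismatch (the helper is an assumption, not the committed fix): `train.py` passes an argparse `Namespace`, while restoring a checkpoint yields a plain dict, so normalise both to a dict before use.

```python
from argparse import Namespace

def as_dict(hparams):
    """Accept either a Namespace or a dict and return a plain dict."""
    return vars(hparams) if isinstance(hparams, Namespace) else dict(hparams)

print(as_dict(Namespace(num_channels=4)))  # {'num_channels': 4}
print(as_dict({"num_channels": 4}))        # {'num_channels': 4}
```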
This is a big one! From a quick look it's good, I think your changes simplify things for the better, and I can tell you put a lot of thought into it. Thanks so much! I'm going to run through a training session just to make sure I understand all the changes, then merge it in as soon as I can.
@mishushakov I think all the code is there, but when I follow the Readme instructions it overwrites the models/pedalnet/data.pickle file instead of putting it in the proper path. I think I understand what you're going for, is it required to give a --model argument to properly save the data.pickle? If so, let's add that to the Readme. I am using the 'refactor' branch.
right now i'm placing `data.pickle` into the model's directory, because each model depends on it
the default model name is "pedalnet" (it lives under models/pedalnet), unless you specify otherwise with `--model`
do you think it would be a good idea to embed the contents of `data.pickle` into the model.ckpt itself? that would remove the `data.pickle` dependencies
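The embedding idea could look roughly like this. This is a toy illustration in plain pickle terms; real checkpoints are PyTorch files and the keys and contents here are assumptions:

```python
import pickle

# Bundle the data.pickle contents into the checkpoint dict itself,
# so only one artefact ships per model instead of two.
data = {"mean": 0.0, "std": 1.0}                 # stand-in for data.pickle
checkpoint = {"state_dict": {}, "data": data}

restored = pickle.loads(pickle.dumps(checkpoint))
print(restored["data"])  # {'mean': 0.0, 'std': 1.0}
```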
I think I see my confusion now. I cut the training short, so it didn't get to `trainer.save_checkpoint(args.model)` and create the model in the models folder. So when I supplied the .ckpt in the lightning_logs, it couldn't find the data.pickle. After letting it finish training, I could run the steps as stated in the Readme. Maybe just make an extra note under the "Train" section that the --model flag should be in the form "models/model_name/model_name.ckpt". Does that make sense?
Edit: I’ll go ahead and make the merge, then add that clarification line since it’s such a small change. Good work!
yeah, i thought of that
for testing i just set `max_epochs` to 0
i can check/update the documentation tomorrow. thanks for the feedback so far!
i see you've updated the comment. ps: make sure when resolving the conflict you accept my changes
Nice merge, congrats!
🎉🎉🎉
Changelog

- Simplified gpu/tpu code: training is on CPU/GPU unless specified otherwise
- Added `--model` option: allows you to specify the model name, default is `models/pedalnet/pedalnet.ckpt`
- Prepare function now included with `train.py`: you can train models with `train.py in.wav out.wav`; `prepare.py` can still be run manually (for people watching your videos)
- In plot function, changed `--show_plot` parameter behaviour: if `--show` is present it will show the plots, otherwise not
- Renamed files for ease of understanding
- Changed folder structure (note: `model` can be overwritten with the `--model` option, see above)
  - `/figures` - plots, format: model_detail_signal_comparison_e2s_1.1695(?).png, model_diff_spectrogram.png, model_signal_comparison_e2s_1.1695(?).png
  - `/tests` - tests, format: model_x_test.wav, model_y_pred.wav, model_y_test.wav
- Updated requirements

Checklist

- [ ] not tested
- [ ] not working as expected
- [ ] fails with an error

Other

- remove samples from repository?
- `--show_plots` turns into `--show-plots`?
- maybe rename convert to export? ❤️