The code has an automatic learning rate adjustment mechanism and an early stopping mechanism. Learning rate adjustment: if the test-set loss does not decrease for 3 epochs, the learning rate is reduced by 50 percent. Early stopping: if the test-set loss does not decrease for 20 epochs, training stops and the best model is kept.
# BaseModel.py
from torch.optim.lr_scheduler import ReduceLROnPlateau
# EarlyStopping is assumed to come from pytorch_lightning.callbacks (or an equivalent class in this repo).

if self.conf["train"]["half_lr"]:
    # Halve the learning rate when the validation loss stops decreasing.
    self.scheduler = ReduceLROnPlateau(
        optimizer=self.optimizer,
        factor=self.conf["scheduler"]["factor"],
        patience=self.conf["scheduler"]["patience"],
        verbose=self.conf["scheduler"]["verbose"],
    )
if self.conf["train"]["early_stop"]:
    # Stop training after 20 epochs without improvement and keep the best model.
    self.early_stop = EarlyStopping(monitor="val_loss", patience=20, verbose=True)
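For reference, here is a minimal sketch of what the corresponding config values could look like as a Python dict, using the factor/patience values described above (the exact config format in this repo may differ):

conf = {
    "train": {
        "half_lr": True,       # enable the ReduceLROnPlateau scheduler
        "early_stop": True,    # enable early stopping
    },
    "scheduler": {
        "factor": 0.5,         # reduce the learning rate by 50 percent
        "patience": 3,         # after 3 epochs without test-set loss improvement
        "verbose": True,
    },
}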
You can change these values according to your needs. As I recall, a training-set loss of around -23.5 or lower (more negative) indicates a good model.
Thank you. I guess there is some issue with the prediction code? Even after training for only 1 epoch, when I run inference the audio output is the same as the input. I would have expected the output to be worse than the input.
Could you please show your code and results? Is the spectrogram the same?
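If it helps, here is a minimal sketch of how to compare the spectrograms of the input and the model output (the file paths are placeholders, and librosa/matplotlib are assumptions, not necessarily what this repo uses):

import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Placeholder paths: the input audio and the model's output.
mix, sr = librosa.load("input.wav", sr=None)
est, _ = librosa.load("output.wav", sr=sr)

fig, axes = plt.subplots(2, 1, figsize=(10, 6), sharex=True)
for ax, (name, wav) in zip(axes, [("input", mix), ("output", est)]):
    # Log-magnitude STFT spectrogram for visual comparison.
    spec = librosa.amplitude_to_db(np.abs(librosa.stft(wav)), ref=np.max)
    librosa.display.specshow(spec, sr=sr, x_axis="time", y_axis="hz", ax=ax)
    ax.set_title(name)
plt.tight_layout()
plt.savefig("spectrogram_comparison.png")

If the two panels are visually identical, the model is likely just passing the input through; as training progresses, the output spectrogram should start to differ from the input.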
Thanks for getting back. I just needed to train more. I am seeing a difference in the spectrogram. Will continue training.
@agunapal Can you show me your loss log? Why is my loss around 12 at epoch 2?
Hello, could you please share how many epochs you trained the model for?
I see that the training loss plateaus around -21 and then stays there for many epochs.
Also, how do I interpret the loss value? And how do you know when training is done?
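For what it's worth, negative losses in this range are typically a negated SI-SNR in dB (this is an assumption; check the loss function actually used in this repo). Under that assumption, a loss of -21 means the estimates reach roughly 21 dB SI-SNR against the references, and more negative is better. A minimal sketch of that metric:

import torch

def neg_si_snr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative scale-invariant SNR in dB: lower (more negative) is better."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference to make the measure scale-invariant.
    target = (est * ref).sum(dim=-1, keepdim=True) * ref / (ref.pow(2).sum(dim=-1, keepdim=True) + eps)
    noise = est - target
    return -10 * torch.log10(target.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps) + eps)

As for knowing when training is done: with the early-stopping setup shown above, training stops automatically once the validation loss has not improved for 20 epochs, and the best checkpoint is kept.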