Closed 980202006 closed 3 years ago
It depends. Usually the train loss falls below 0.3 when the softmax temperature parameter is set to tau=0.05 (see 640_lamb.yaml).
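For context, the temperature tau divides the similarity logits before the softmax in a contrastive loss, so a small tau like 0.05 sharpens the distribution and changes the loss scale. A minimal numpy sketch (illustrative only; the similarity values are made up, not from the repo):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Cosine similarities between one anchor and 4 candidates;
# index 0 is assumed to be the positive pair.
sims = np.array([0.9, 0.5, 0.4, 0.1])

for tau in (1.0, 0.05):
    probs = softmax(sims / tau)       # temperature-scaled softmax
    loss = -np.log(probs[0])          # cross-entropy vs. the positive
    print(f"tau={tau}: p(positive)={probs[0]:.4f}, loss={loss:.4f}")
```

With tau=0.05 the positive dominates the softmax, so once the model separates positives from negatives the loss drops toward values well below those seen at tau=1.0, which is why a converged train loss under 0.3 is plausible at this setting.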
With this setup, the validation loss (red) and train loss (blue) up to the 100th epoch look like this:

[figure: train/validation loss curves]
At the end of each epoch, we run a mini search test. Accuracy@1s and accuracy@3s are useful validation metrics for the model. As shown in the figure below, accuracy reaches about 82.x @1s and 94.x @3s.

[figure: accuracy@1s and accuracy@3s curves]
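A search test of this kind can be sketched as a top-1 retrieval accuracy: match each query embedding against a database and check whether the nearest item is the correct one. A toy sketch (the function name and data are mine, not the repo's actual mini-search-test implementation):

```python
import numpy as np

def top1_accuracy(query_emb, db_emb, true_idx):
    """Fraction of queries whose nearest database item
    (inner product of L2-normalized embeddings) is correct."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = db_emb / np.linalg.norm(db_emb, axis=1, keepdims=True)
    pred = (q @ d.T).argmax(axis=1)   # nearest neighbor per query
    return float((pred == np.asarray(true_idx)).mean())

# Toy example: 3 database items, 3 queries that are noisy copies of them.
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 8))
queries = db + 0.01 * rng.normal(size=db.shape)
print(top1_accuracy(queries, db, [0, 1, 2]))
```

In the real setting, @1s vs. @3s would correspond to building the query embedding from shorter or longer audio segments, with longer queries giving the higher accuracy.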
The learning curve is from the pre-trained model in #10.
Thank you!
The loss and val_acc are updated. Please see https://github.com/mimbres/neural-audio-fp/issues/26#issuecomment-1122516028
How much is the loss when the model converges?