Open shwangdev opened 5 years ago
Thanks for letting me know. I have tried different batch sizes, but I don't remember how large they were. The usable batch size depends on the GPU's memory size. Nevertheless, it's worth knowing.
I think you can get the loss value, which is written as cost in my original code:
cost = self.model.train_on_batch(batch[0], batch[1])
https://github.com/kwonmha/Improving-RNN-recommendation-model/blob/f63ba48ef45fc621d9ea613863950fce7488ef18/neural_networks/rnn_base.py#L217
I'm not sure how you would get accuracy, but if you have both loss and accuracy after each iteration, it should be easy to save the weight file by adding a simple condition:
if acc > 0.99 or loss < 0.04:
SAVE_MODEL
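The condition above can be sketched as plain Python. This is just an illustration, not code from the repo: `should_save`, `training_loop`, and `save_path` are hypothetical names, and the thresholds mirror the example values in this comment. The loop assumes the model was compiled with `metrics=['accuracy']`, in which case Keras's `train_on_batch` returns `[loss, accuracy]` instead of a single loss value.

```python
def should_save(acc, loss, acc_threshold=0.99, loss_threshold=0.04):
    """Return True when the model is good enough to checkpoint.

    Thresholds are illustrative, matching the condition sketched above.
    """
    return acc > acc_threshold or loss < loss_threshold


def training_loop(model, batches, save_path="best_weights.h5"):
    """Hypothetical training loop; train_on_batch / save_weights stand in
    for the Keras calls used in rnn_base.py."""
    for batch in batches:
        # With metrics=['accuracy'] set at compile time, Keras's
        # train_on_batch returns [loss, accuracy].
        loss, acc = model.train_on_batch(batch[0], batch[1])
        if should_save(acc, loss):
            model.save_weights(save_path)
            break
```

In practice you might also track the best loss seen so far and only save when it improves, rather than using fixed thresholds.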
The related code is here: https://github.com/kwonmha/Improving-RNN-recommendation-model/blob/f63ba48ef45fc621d9ea613863950fce7488ef18/neural_networks/rnn_base.py#L260
Modifying the code in that block should work.