dar-tau opened this issue 3 years ago
Hey! https://github.com/lucidrains/reformer-pytorch/blob/2cbc36bb280c0a6de46d838baad11a533a015fc3/pretraining/self-supervised.py#L327
you're dividing eval_loss and perplexity each time you pass through the loop, instead of once after it finishes, so the running averages get divided repeatedly.
Cheers, Guy :)
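
For anyone landing here, a minimal sketch of the pattern being described (this is an illustration, not the actual code from self-supervised.py): dividing the accumulated loss inside the loop shrinks it on every pass, whereas it should be divided by the step count exactly once after the loop.

```python
import math

losses = [2.0, 2.0, 2.0, 2.0]  # hypothetical per-batch eval losses

# Buggy pattern: dividing inside the loop divides the running sum repeatedly.
eval_loss = 0.0
for step, batch_loss in enumerate(losses, start=1):
    eval_loss += batch_loss
    eval_loss /= step          # <-- divides the accumulated sum on every pass
buggy_perplexity = math.exp(eval_loss)

# Fixed pattern: accumulate first, divide once after the loop.
eval_loss = 0.0
for batch_loss in losses:
    eval_loss += batch_loss
eval_loss /= len(losses)       # mean loss over all eval batches
perplexity = math.exp(eval_loss)
```

With identical per-batch losses the mean should just equal the batch loss (2.0 here), but the buggy version drifts lower every iteration, so the reported perplexity is too small as well.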