gcp opened this issue 3 years ago
Am also seeing this.
To be fair I'm also seeing this on Facebook's MADGRAD now, so I wonder if Adam/madgrad are just more likely to trigger this kind of divergence or if a bug slipped into the training data.
Basically one of the loss values becomes NaN, and this causes the optimizer to instantly fail (I guess SGD just recovers if that happens).
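This matches the difference between stateful and stateless optimizers: Adam/MADGRAD keep running moment estimates, so a single NaN gradient poisons every later step, while plain SGD has no state to corrupt. A minimal sketch of a guard that skips the step on a non-finite loss (`safe_step` is a hypothetical helper, not part of Ranger21):

```python
import torch

def safe_step(optimizer, loss):
    # Hypothetical guard: skip the optimizer step when the loss is
    # non-finite, so one bad batch never reaches the optimizer's
    # running moment estimates.
    if not torch.isfinite(loss):
        optimizer.zero_grad(set_to_none=True)
        return False  # caller can log and move on to the next batch
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```
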
Reducing my learning rate solved it.
I've had the same issue. Reducing the learning rate did help, but at 1e-5 with default parameters, and at 1e-6 with madgrad, I still get NaN loss values. Curious if there's something else I can do.
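When lowering the learning rate doesn't fix it, PyTorch's built-in anomaly detection can pinpoint the op that first produced the NaN. A minimal sketch (the negative `sqrt` here is just a stand-in for whatever op actually goes non-finite in your model):

```python
import torch

def first_nan_op():
    # torch.autograd.detect_anomaly() makes backward() raise a
    # RuntimeError at the first op whose gradient is NaN, with a
    # traceback pointing at the forward-pass line that created it.
    # It is slow, so enable it only while hunting the bug.
    try:
        with torch.autograd.detect_anomaly():
            x = torch.tensor([-1.0], requires_grad=True)
            # sqrt of a negative number is NaN in the forward pass,
            # so its gradient is NaN too and anomaly mode trips here
            torch.sqrt(x).sum().backward()
    except RuntimeError as e:
        return str(e)
    return None
```
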
I've just hit it too :(
I found my error. Some of my training data had values way outside my expected range of 0-1, which I caught by adding an assert in my dataloader.
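For anyone hitting the same thing, a range check like this fails fast at the offending sample instead of surfacing much later as a NaN loss. `RangeCheckedDataset` is a hypothetical wrapper, not from any library:

```python
import torch
from torch.utils.data import Dataset

class RangeCheckedDataset(Dataset):
    """Hypothetical wrapper: asserts every input tensor is inside
    the expected [lo, hi] range before it reaches the model."""

    def __init__(self, base, lo=0.0, hi=1.0):
        self.base, self.lo, self.hi = base, lo, hi

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        x, y = self.base[idx]
        assert self.lo <= x.min().item() and x.max().item() <= self.hi, (
            f"sample {idx} outside [{self.lo}, {self.hi}]: "
            f"min={x.min().item():.3g} max={x.max().item():.3g}")
        return x, y
```
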
I integrated ranger21 into https://github.com/glinscott/nnue-pytorch and am exploring different parameters. I always hit this issue after the first step of training.
This is what I'm using:
```python
optimizer = ranger21.Ranger21(train_params,
                              lr=8.75e-4, betas=(.9, 0.999), eps=1.0e-7,
                              using_gc=False, using_normgc=False,
                              weight_decay=0,
                              num_batches_per_epoch=int(self.epoch_size / self.batch_size),
                              num_epochs=self.max_epochs,
                              warmdown_active=False, use_warmup=False,
                              use_adaptive_gradient_clipping=False,
                              softplus=False,
                              use_madgrad=True,
                              pnm_momentum_factor=0.0)
```
Changing lr, eps, weight_decay, use_adaptive_gradient_clipping, and use_warmup appears to have no effect. The NaN comes from the forward pass in the second step, so some weights must become NaN after the first optimizer step. The Adam and AdaBelief cores work fine.
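To confirm it really is the weights (and not the inputs) that go non-finite after step one, a check like this hypothetical helper, run right after `optimizer.step()`, reports exactly which parameters the first update corrupted:

```python
import torch

def nonfinite_params(model):
    # List the names of parameters containing NaN/inf; call this
    # after optimizer.step() to see which weights were corrupted.
    return [name for name, p in model.named_parameters()
            if not torch.isfinite(p).all()]
```
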
Calling Ranger21 with mostly default parameters:
Training seems fine for half a day with decent progress on all loss metrics, but then halts: