In a nutshell, my code looks like this:

autograd_hacks.add_hooks(model)
all_params = []
for i in range(20):
    epoch_params = []
    train_loss = training_function(model, data, lr)
    autograd_hacks.compute_grad1(model)
    for name, params in model.named_parameters():
        sample_grads = params.grad1.clone().cpu().detach().numpy()
        epoch_params.append(sample_grads)
    all_params.append(epoch_params)
autograd_hacks.disable_hooks(model)
all_params should contain different values, since the gradients change every epoch, but it always ends up holding the same array repeated. I tried using remove_hooks and clear_backprops, but they either gave me errors or did nothing. The training function does the usual forward pass, loss, backward, and optimizer step. I'd imagine the solution to this is easy; if it is not, I can write a minimal reproducible example.
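For context, training_function is roughly the following (simplified; the optimizer, loss, and data handling are placeholders, not my exact setup):

import torch
import torch.nn.functional as F

def training_function(model, data, lr):
    # Plain SGD just for illustration; the real code uses a standard optimizer.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    inputs, targets = data                    # one batch of samples and labels
    optimizer.zero_grad()
    outputs = model(inputs)                   # forward pass
    loss = F.cross_entropy(outputs, targets)  # usual loss
    loss.backward()                           # backward pass
    optimizer.step()                          # parameter update
    return loss.item()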