danieltan07 / learning-to-reweight-examples

PyTorch Implementation of the paper Learning to Reweight Examples for Robust Deep Learning
352 stars 60 forks

ValueError: can't optimize a non-leaf Variable #4

Open yule5426 opened 6 years ago

yule5426 commented 6 years ago

A problem occurred when I was training the Baseline Model:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 net, opt = build_model()
      2
      3 net_losses = []
      4 plot_step = 100
      5 net_l = 0

<ipython-input> in build_model()
      6     torch.backends.cudnn.benchmark = True
      7
----> 8     opt = torch.optim.SGD(net.params(), lr=hyperparameters["lr"])
      9
     10     return net, opt

/opt/anaconda3/lib/python3.6/site-packages/torch/optim/sgd.py in __init__(self, params, lr, momentum, dampening, weight_decay, nesterov)
     55         if nesterov and (momentum <= 0 or dampening != 0):
     56             raise ValueError("Nesterov momentum requires a momentum and zero dampening")
---> 57         super(SGD, self).__init__(params, defaults)
     58
     59     def __setstate__(self, state):

/opt/anaconda3/lib/python3.6/site-packages/torch/optim/optimizer.py in __init__(self, params, defaults)
     37
     38         for param_group in param_groups:
---> 39             self.add_param_group(param_group)
     40
     41     def __getstate__(self):

/opt/anaconda3/lib/python3.6/site-packages/torch/optim/optimizer.py in add_param_group(self, param_group)
    153                 raise ValueError("optimizing a parameter that doesn't require gradients")
    154             if not param.is_leaf:
--> 155                 raise ValueError("can't optimize a non-leaf Variable")
    156
    157         for name, default in self.defaults.items():

ValueError: can't optimize a non-leaf Variable
```
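For context: PyTorch optimizers only accept *leaf* tensors, i.e. tensors created directly by the user rather than produced by an operation in the autograd graph. A minimal sketch of the check that fires here, independent of this repo's `net.params()` (assumes a recent PyTorch, where the message reads "non-leaf Tensor" instead of "non-leaf Variable"):

```python
import torch

# A leaf tensor: created directly by the user, so the optimizer can own it.
w = torch.randn(3, requires_grad=True)
print(w.is_leaf)   # True

# A non-leaf tensor: the result of an operation on another tensor.
# Its gradient flows back to `w`; it is not an independent parameter.
w2 = w * 2
print(w2.is_leaf)  # False

# Optimizers reject non-leaf tensors with the ValueError seen above.
try:
    torch.optim.SGD([w2], lr=0.1)
except ValueError as e:
    print(e)  # exact wording varies by PyTorch version
```

If the parameters returned by the model are somehow wrapped in extra operations (which some PyTorch versions do differently), they stop being leaves and trigger exactly this error.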
danieltan07 commented 5 years ago

@dy5426 Sorry for the late reply. May I ask which version of PyTorch you are using? This may be a versioning problem.