greatlog / DAN

This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution

Hello, I ran into the following problem while training DANv2 and would like to ask how to fix it #35

Closed — lct1997 closed this 2 years ago

lct1997 commented 2 years ago

```
File "/home/lct/L/DAN/codes/config/DANv2/train.py", line 203, in main
    model = create_model(opt)  # load pretrained model of SFTMD
File "/home/lct/L/DAN/codes/config/DANv2/models/__init__.py", line 17, in create_model
    m = M(opt)
File "/home/lct/L/DAN/codes/config/DANv2/models/blind_model.py", line 90, in __init__
    lr_scheduler.MultiStepLR_Restart(
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 27, in __init__
    super(MultiStepLR_Restart, self).__init__(optimizer, last_epoch)
File "/home/lct/anaconda3/envs/DAN/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 78, in __init__
    self.step()
File "/home/lct/anaconda3/envs/DAN/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 141, in step
    values = self.get_lr()
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 34, in get_lr
    return [
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 35, in <listcomp>
    group["initial_lr"] * weight for group in self.optimizer.param_groups
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
(DAN) lct@251:~/L/DAN/codes/config/DANv2$
```

Training DANv1 raises no error and runs normally, although the loss diverges easily; training DANv2 produces the error above. How should I modify the code so that training works? Many thanks!
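A minimal sketch of the failure mode, for illustration only: the last traceback frame suggests the scheduler evaluates `group["initial_lr"] * weight` over the optimizer's param groups. If the configured learning rate ends up as `None` (for example, a missing or mistyped `lr_G` entry in the training YAML is one plausible cause), the param group stores `None` and the multiplication raises exactly this `TypeError`. The names below are hypothetical and only mimic the shape of the real code:

```python
# Hypothetical reproduction: a param group whose "initial_lr" is None,
# multiplied by an int restart weight, as the listcomp in the traceback does.
param_groups = [{"initial_lr": None}]  # would normally be a float, e.g. 4e-4
weight = 1  # restart weight from the scheduler config

try:
    lrs = [group["initial_lr"] * weight for group in param_groups]
except TypeError as e:
    # Same error class and message shape as in the traceback above.
    print(e)
```

So a first thing to check would be that the learning-rate option for DANv2 is actually present and parsed as a number in the config file, rather than silently becoming `None`.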

leeziqiang commented 2 years ago

Hello, I ran into the same problem. Have you solved it?
