greatlog / DAN

This is an official implementation of "Unfolding the Alternating Optimization for Blind Super Resolution"

group["initial_lr"]: unsupported operand type(s) for *: 'NoneType' and 'int' #44

Open Amer-Alhamvi opened 2 years ago

Amer-Alhamvi commented 2 years ago

```
Traceback (most recent call last):
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 349, in <module>
    main()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 207, in main
    model = create_model(opt)  # load pretrained model of SFTMD
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\__init__.py", line 17, in create_model
    m = M(opt)
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\blind_model.py", line 90, in __init__
    lr_scheduler.MultiStepLR_Restart(
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 27, in __init__
    super(MultiStepLR_Restart, self).__init__(optimizer, last_epoch)
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 77, in __init__
    self.step()
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 154, in step
    values = self.get_lr()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 34, in get_lr
    return [
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 35, in <listcomp>
    group["initial_lr"] * weight for group in self.optimizer.param_groups
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```

I tried both PyTorch 1.11 and 1.5.

jiangmengyu18 commented 1 year ago

The training .yml file doesn't set a learning rate for the estimator; you need to add it to the .yml file. That missing value is what comes through as 'NoneType'.
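For context, a sketch of where the gap sits in the config. The key names (`lr_G`, `lr_E`) and values here are assumptions drawn from this thread and typical DANv2 setting files, not from a verified config; these MMSR-derived codebases appear to wrap the parsed options so that missing keys read as None rather than raising a KeyError:

```yaml
#### training settings (illustrative excerpt)
train:
  lr_G: !!float 2e-4   # restorer/generator learning rate
  # lr_E (estimator learning rate) is missing here: the option wrapper then
  # yields None, the estimator's param group is created with lr=None, and the
  # scheduler's group["initial_lr"] * weight raises the TypeError above.
  lr_scheme: MultiStepLR
```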

Lincoln20030413 commented 1 year ago

Then how do I set it?

Lincoln20030413 commented 1 year ago

> Traceback (most recent call last): ... TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
>
> I tried both PyTorch 1.11 and 1.5.

Hello, did you solve this? Can you help me?

Lincoln20030413 commented 1 year ago

> The training .yml file doesn't set a learning rate for the estimator; you need to add it to the .yml file. That missing value is what comes through as 'NoneType'.

Sorry, but that doesn't solve it for me.

eze1376 commented 9 months ago

Hello, you must assign a value to "lr_E" in the train section of the .yml file (e.g. `lr_E: !!float 4e-4`).
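To put that in context, a minimal hedged sketch of the train section with the fix applied; apart from `lr_E`, the keys and values are illustrative, not the paper's settings:

```yaml
#### training settings (illustrative excerpt)
train:
  lr_G: !!float 2e-4   # restorer learning rate (usually already present)
  lr_E: !!float 4e-4   # estimator learning rate -- the previously missing key
  lr_scheme: MultiStepLR
```

With `lr_E` set, each param group's `initial_lr` is a real float, so the multiplication in `MultiStepLR_Restart.get_lr` no longer hits `None * int`.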