An error occurs when I run the code:
Traceback (most recent call last):
  File "basicsr/train.py", line 266, in <module>
    train_pipeline(root_path)
  File "basicsr/train.py", line 211, in train_pipeline
    model.optimize_parameters(current_iter)
  File "/opt/data/private/wyh/UHDformer-main/basicsr/models/femasr_model.py", line 148, in optimize_parameters
    l_g_total.mean().backward()
  File "/root/anaconda3/envs/wdgan/lib/python3.6/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/root/anaconda3/envs/wdgan/lib/python3.6/site-packages/torch/autograd/__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
How do I fix it?
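For context, this RuntimeError means the tensor passed to .backward() (here l_g_total.mean()) has requires_grad=False and no grad_fn, so autograd has no graph to traverse. Below is a minimal sketch, independent of the UHDformer code, showing one common way this arises (the loss is computed under torch.no_grad(), or from detached/frozen tensors) and the corresponding fix:

import torch

# Minimal reproduction: a loss built from tensors that are not tracked by
# autograd has no grad_fn, so .backward() raises this exact RuntimeError.
w = torch.randn(3, requires_grad=True)
x = torch.randn(3)

with torch.no_grad():          # gradients are not recorded inside this block
    loss = (w * x).sum()

print(loss.requires_grad)      # False -> loss.backward() would raise
# loss.backward()              # RuntimeError: element 0 of tensors does not require grad ...

# Fix: compute the loss outside torch.no_grad() and without .detach(),
# so it stays connected to parameters that require gradients.
loss = (w * x).sum()
print(loss.requires_grad)      # True
loss.backward()                # works

In femasr_model.py the likely culprits are therefore: the forward pass or loss terms being computed inside a torch.no_grad() context, a .detach() applied to the tensors that feed l_g_total, or the generator's parameters having requires_grad=False (e.g. frozen and never re-enabled before optimize_parameters is called). Checking l_g_total.requires_grad just before line 148 should narrow down where the graph is being cut.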