MC-E / Deep-Generalized-Unfolding-Networks-for-Image-Restoration

Accepted by CVPR 2022

About Loss #6

Open keys-zlc opened 1 year ago

keys-zlc commented 1 year ago

First, this is really awesome work! When I run `train.py` in the `Deblurring` folder, I get an error:

```
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```

It seems you use `numpy.sum()` to combine the losses after `CharbonnierLoss()` and `EdgeLoss()`. As far as I know, if we want to call `loss.backward()`, we should use the operations provided by PyTorch (please correct me if I'm wrong). The code you provided is:

```python
loss_char = np.sum([criterion_char(restored[j], target) for j in range(len(restored))])
loss_edge = np.sum([criterion_edge(restored[j], target) for j in range(len(restored))])
loss = (loss_char) + (0.05*loss_edge)
```

When I replace it with the code below, the PSNR does not change during training:

```python
loss_char = torch.tensor([criterion_char(restored[j], target) for j in range(len(restored))], requires_grad=True).sum()
loss_edge = torch.tensor([criterion_edge(restored[j], target) for j in range(len(restored))], requires_grad=True).sum()
loss = (loss_char) + (0.05*loss_edge)
```

Could you help me train it correctly? Looking forward to your reply!
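A minimal sketch of one way to sum the per-stage losses while keeping the autograd graph intact: use Python's built-in `sum()` (or `torch.stack(...).sum()`) instead of `np.sum()`. Here `torch.nn.L1Loss` is only a stand-in for the repo's `CharbonnierLoss`/`EdgeLoss`, and the tensors are dummy data:

```python
import torch

# Stand-in for the repo's CharbonnierLoss/EdgeLoss (assumption for this sketch).
criterion_char = torch.nn.L1Loss()

# Dummy multi-stage outputs and target.
restored = [torch.randn(2, 3, requires_grad=True) for _ in range(3)]
target = torch.zeros(2, 3)

# Built-in sum() adds the tensors with torch ops, so the result keeps a
# grad_fn. np.sum() instead tries to convert CUDA tensors to NumPy arrays
# and fails, while torch.tensor([...]) copies the values and detaches them
# from the graph (which is why PSNR stops improving).
loss_char = sum(criterion_char(r, target) for r in restored)

print(loss_char.requires_grad)  # the graph is intact
loss_char.backward()            # gradients reach every restored[j]
```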

MC-E commented 1 year ago

Thanks for your attention. This is a numpy version issue. Please update numpy.

sunwarm2001 commented 1 year ago

Have you solved your problem? I have a similar problem: when I migrate the original loss code to another framework, it returns a `numpy.float64` loss, which cannot be used with `backward()` even after converting it to a tensor. I get this error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
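A minimal illustration of why that error appears (toy values, not the repo's code): once a value has passed through NumPy, its autograd history is gone, and wrapping it back into a tensor creates a fresh leaf with no `grad_fn` to backpropagate through:

```python
import torch

x = torch.randn(3, requires_grad=True)
losses = [xi * 2 for xi in x]      # each element carries a grad_fn

# torch.stack() keeps the graph, so backward() reaches x.
good = torch.stack(losses).sum()
good.backward()
assert x.grad is not None          # gradients flow back to x

# A float that went through NumPy (e.g. numpy.float64) has lost its history;
# wrapping it back into a tensor yields a leaf with grad_fn=None.
detached = torch.tensor(float(good.detach()))
try:
    detached.backward()            # nothing upstream to backpropagate through
except RuntimeError as err:
    print(err)                     # "element 0 of tensors does not require grad..."
```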

keys-zlc commented 1 year ago

Sorry, it was too long ago; I can't remember.


GaoQF1016 commented 1 year ago

> Have you solved your problem? I have a similar problem: when I migrate the original loss code to another framework, it returns a `numpy.float64` loss, which cannot be used with `backward()` even after converting it to a tensor. I get this error:
>
> RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I downgraded my numpy version and the issue was resolved.

sunwarm2001 commented 1 year ago

> > Have you solved your problem? I have a similar problem: when I migrate the original loss code to another framework, it returns a `numpy.float64` loss, which cannot be used with `backward()` even after converting it to a tensor. I get this error:
> >
> > RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
>
> I downgraded my numpy version and the issue was resolved.

Could you tell me which numpy version you used? My environment requires numpy>=1.21.

yanghan0617 commented 9 months ago

> Have you solved your problem? I have a similar problem: when I migrate the original loss code to another framework, it returns a `numpy.float64` loss, which cannot be used with `backward()` even after converting it to a tensor. I get this error:
>
> RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
>
> I downgraded my numpy version and the issue was resolved.
>
> Could you tell me which numpy version you used? My environment requires numpy>=1.21.

Has your problem been solved?