huawei-noah / Pretrained-IPT

Inappropriate implementation of Multi-head in IPT? (which only works on SR) #55

Open · hyunW3 opened this issue 11 months ago

hyunW3 commented 11 months ago

Hello, I'm interested in your work and have been trying some simple things based on it. While inspecting your code, I found a problem. I want to know whether it is a real problem or my misunderstanding.

At test time, the arguments passed to the model are the degraded image and idx_scale, as shown below:

https://github.com/huawei-noah/Pretrained-IPT/blob/675540ed09169709e3603e9872fd13d238cbbe27/trainer.py#L42

I thought "idx_scale" stands for the task id, which denotes $i$ in $f_H = H^i(x)$ mentioned in Section 3.1 (Head) of the paper.

However, I found that "idx_scale" only varies with the scale level in SR (super-resolution); it is set to 0 for both the denoising and deraining tasks.

Also, the number of heads is determined by the "args.scale" list, not by the number of tasks:

https://github.com/huawei-noah/Pretrained-IPT/blob/675540ed09169709e3603e9872fd13d238cbbe27/model/ipt.py#L33-L39
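
To make the concern concrete, here is a minimal, self-contained sketch of the pattern I am describing (not the repo's exact code; the class name and layer shapes are simplified for illustration). Since the denoising and deraining configs both run with scale 1, they end up sharing head index 0:

```python
import torch
import torch.nn as nn

class ScaleIndexedHeads(nn.Module):
    """Simplified stand-in for the head construction in ipt.py:
    one head per entry of args.scale, NOT one per task."""
    def __init__(self, scales, n_colors=3, n_feats=64):
        super().__init__()
        self.head = nn.ModuleList(
            [nn.Conv2d(n_colors, n_feats, 3, padding=1) for _ in scales]
        )

    def forward(self, x, idx_scale):
        # mirrors the test-time call: model(degraded_image, idx_scale)
        return self.head[idx_scale](x)

model = ScaleIndexedHeads(scales=[1])  # denoising/deraining both use scale 1
x = torch.randn(1, 3, 48, 48)
out = model(x, idx_scale=0)            # both tasks select head index 0
print(out.shape)                       # torch.Size([1, 64, 48, 48])
```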

I want to make sure of this in order to improve your code (or repo). Thank you for reading this issue; I look forward to your response.

JGyoung-UCAS commented 9 months ago

I ran into the same problem. Did you solve it?

HantingChen commented 9 months ago

Sorry for the unclear code. This implementation is correct since we only released the test code. Your approach is the right way to modify it into training code.

hyunW3 commented 9 months ago

@JGyoung-UCAS Actually, I'm not working with this code right now. However, to share my thoughts: I think the implementation of "self.head" should depend on a "task_id" rather than "args.scale", so that each task gets its own head. This might require splitting the "args.scale" variable into a task id and the scale within the SR task (see the sketch at the end of this comment).

@HantingChen Thank you for your response. My question is whether the multi-head is correctly implemented regardless of training or testing, since in your implementation the "idx_scale 0" head covers two tasks (denoising and deraining).
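
For what it's worth, here is a hypothetical sketch of the separation I have in mind (the task names and the `TaskIndexedHeads` class are illustrative, not from the repo): heads are indexed by a task id, and any SR scale would be handled separately (e.g., in the tail):

```python
import torch
import torch.nn as nn

# Hypothetical task set (illustrative names, not from the repo)
TASKS = ["sr_x2", "sr_x3", "sr_x4", "denoise", "derain"]

class TaskIndexedHeads(nn.Module):
    """One head per task, so denoising and deraining no longer share a head."""
    def __init__(self, tasks, n_colors=3, n_feats=64):
        super().__init__()
        self.task_to_idx = {t: i for i, t in enumerate(tasks)}
        self.head = nn.ModuleList(
            [nn.Conv2d(n_colors, n_feats, 3, padding=1) for _ in tasks]
        )

    def forward(self, x, task):
        # the task id selects the head; the SR upscaling factor would be
        # handled elsewhere (e.g., in the tail), not here
        return self.head[self.task_to_idx[task]](x)

model = TaskIndexedHeads(TASKS)
x = torch.randn(1, 3, 48, 48)
print(model(x, "denoise").shape)  # distinct head from "derain"
```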

JGyoung-UCAS commented 9 months ago

Thank you for the reminder. This code is quite incomplete. Maybe you can check the MindSpore version of this code: https://gitee.com/mindspore/models/tree/master/research/cv/IPT. It seems to be clearer.