daitao / SAN

Second-order Attention Network for Single Image Super-resolution (CVPR-2019)

There are some difference between code and paper #4

Open Zysty opened 5 years ago

Zysty commented 5 years ago

Hi, thanks for your work.

  1. I cannot find any information about 'LSRAG' in the code of 'san.py'.

  2. In the code, I find that 'SOCA' is not at the tail of the NLRG, and there is still one Conv layer following 'SOCA'. Additionally, SAN in the code consists of several NLRGs (n_resgroups), while in the paper SAN has just one NLRG, which consists of several LSRAGs. So I think the 'NLRG' in the code is actually the 'LSRAG' in the paper. Is that right? And if so, why is the SOCA followed by a Conv layer? In the paper, the SOCA is at the tail of the LSRAG.

I want to know the reason for these differences. Looking forward to your reply.

daitao commented 5 years ago
  1. The LSRAG consists of several residual blocks ended with a SOCA. The code is in "Nonlocal Enhanced Residual Group (NLRG)".
  2. You are right. The 'NLRG' in the code is actually the 'LSRAG' in the paper. I will correct the mistake and check the code soon. Thank you.
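To make the structure under discussion concrete, here is a minimal PyTorch sketch of an LSRAG as described in this thread: residual blocks, then SOCA, then the trailing convolution (`self.conv_last`) that the question is about. This is an illustrative reconstruction, not the repository's actual code; the real SOCA computes second-order (covariance) statistics, which is simplified here to a plain channel-gating placeholder so the sketch stays self-contained.

```python
import torch
import torch.nn as nn


class SOCA(nn.Module):
    """Placeholder for second-order channel attention.

    The real SOCA pools covariance statistics; this sketch substitutes a
    simple squeeze-and-excitation-style gate so it runs stand-alone.
    """

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # channel-wise rescaling of the feature map
        return x * self.gate(x)


class LSRAG(nn.Module):
    """Sketch of one group: residual blocks -> SOCA -> trailing conv.

    The trailing conv corresponds to the 'self.conv_last(x)' noted in
    this thread; a group-level skip connection wraps the whole body.
    """

    def __init__(self, channels=64, n_resblocks=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(n_resblocks)
        )
        self.soca = SOCA(channels)
        self.conv_last = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        res = x
        for block in self.blocks:
            res = res + block(res)  # local residual blocks
        res = self.conv_last(self.soca(res))  # SOCA, then trailing conv
        return res + x  # group-level skip connection
```

Under this reading, the trailing conv sits between the SOCA output and the group skip connection, which matches what the code (but not the paper's figure) describes.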
Zysty commented 5 years ago

So the 'NLRG' in the code is actually the 'LSRAG' in the paper. But I find that the 'LSRAG' ('NLRG' in the code) ends with a convolution layer in the code ('self.conv_last(x)').

Zysty commented 5 years ago

Hi, I found another issue. In the code, the batch_size default in option.py is 16, and there is no related setting in TrainSAN_scripts.sh. The paper says: 'In each min-batch, 8 LR color patches with size 48 × 48 are provided as inputs.' Does that mean the batch_size should be set to 8?
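If the paper's setting of 8 patches per mini-batch is the intended one, the default in option.py can be overridden on the command line. The flag and script names below are assumptions based on EDSR-style option.py codebases, not confirmed from this repository:

```shell
# Override the option.py default (16) to match the paper's 8 patches
# per mini-batch; --batch_size and main.py are assumed names from
# EDSR-style training scripts and may differ in this repo.
python main.py --batch_size 8 --patch_size 48
```

Lowering the batch size this way would also reduce GPU memory use, which is relevant to the out-of-memory reports below.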

Awenjie10 commented 4 years ago

> Hi, I find another issue. In the code, The batch_size default setting in option.py is 16 and there is no related settings in TrainSAN_scripts.sh. In the paper, 'In each min-batch, 8 LR color patches with size 48 × 48 are provided as inputs. ' Does it means the batch_size is set to 8?

When I use batch_size=16 for training on a 1080 Ti, I run out of memory.

lookthatdog commented 4 years ago

In the paper batch_size=32, but the default setting is 16. Running with the code's settings, my Set5 x2 result is 37.713, much lower than the paper's.

lookthatdog commented 4 years ago

> Hi, I find another issue. In the code, The batch_size default setting in option.py is 16 and there is no related settings in TrainSAN_scripts.sh. In the paper, 'In each min-batch, 8 LR color patches with size 48 × 48 are provided as inputs. ' Does it means the batch_size is set to 8?

> When I use batch_size=16 for training on a 1080 Ti, I run out of memory.

What result did you get when running it?