heykeetae / Self-Attention-GAN

Pytorch implementation of Self-Attention Generative Adversarial Networks (SAGAN)

I want to change 'imsize'. #12

Open ghost opened 6 years ago

ghost commented 6 years ago

I wanted to change the 'imsize' parameter, so I changed 'imsize' from 64 to 128, but I got the following error message: AttributeError: 'Discriminator' object has no attribute 'l4'

Could you tell me a solution? What should I change in the code?

ghost commented 6 years ago

Is l4 dependent on 'imsize == 64'?

JohnnyRisk commented 6 years ago

I have the same problem, and it appears that l4 only gets added when imsize is 64. In order to support a larger imsize, do we need to add more layers? For example, l5 for 128 and l6 for 256?

c1a1o1 commented 6 years ago

I have the same problem! Is attention the same as l4?

JohnnyRisk commented 6 years ago

You can use the following code, replacing the attention, generator, and discriminator modules. You will need to change the import statements and the logging of the generator. Otherwise, these models work as-is to create generators and discriminators that are dynamic in the size of their inputs.

    # Imports assumed by the snippets below; SpectralNorm is the spectral-norm wrapper from this repo (spectral.py).
    import numpy as np
    import torch
    import torch.nn as nn

    from spectral import SpectralNorm


    class Self_Attn_dynamic(nn.Module):
        """ Self attention Layer"""

        def __init__(self, in_dim, activation):
            super(Self_Attn_dynamic, self).__init__()
            self.chanel_in = in_dim
            self.activation = activation

            self.query_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim // 8, kernel_size=1)
            self.key_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim // 8, kernel_size=1)
            self.value_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))

            self.softmax = nn.Softmax(dim=-1)

        def forward(self, x):
            """
                inputs :
                    x : input feature maps (B X C X W X H)
                returns :
                    out : self attention value + input feature
                          (this edited version returns only out, not the attention map)
            """
            # print('attention size {}'.format(x.size()))
            m_batchsize, C, width, height = x.size()
            # print('query_conv size {}'.format(self.query_conv(x).size()))
            proj_query = self.query_conv(x).view(m_batchsize, -1, width * height).permute(0, 2, 1)  # B X N X C
            proj_key = self.key_conv(x).view(m_batchsize, -1, width * height)  # B X C X N
            energy = torch.bmm(proj_query, proj_key)  # transpose check
            attention = self.softmax(energy)  # B X N X N (N = width * height)
            proj_value = self.value_conv(x).view(m_batchsize, -1, width * height)  # B X C X N

            out = torch.bmm(proj_value, attention.permute(0, 2, 1))
            out = out.view(m_batchsize, C, width, height)

            out = self.gamma * out + x
            return out

    class Generator_dynamic(nn.Module):
        """Generator."""

        def __init__(self, batch_size, image_size=64, z_dim=100, conv_dim=64, attn_feat=[16, 32], upsample=False):
            super(Generator_dynamic, self).__init__()
            self.imsize = image_size
            layers = []

            n_layers = int(np.log2(self.imsize)) - 2
            mult = 8  # 2 ** repeat_num
            assert mult * conv_dim > 3 * (2 ** n_layers), 'Need to add higher conv_dim, too many layers'

            curr_dim = conv_dim * mult

            # Initialize the first layer because it is different than the others.
            layers.append(SpectralNorm(nn.ConvTranspose2d(z_dim, curr_dim, 4)))
            layers.append(nn.BatchNorm2d(curr_dim))
            layers.append(nn.ReLU())

            for n in range(n_layers - 1):
                layers.append(SpectralNorm(nn.ConvTranspose2d(curr_dim, int(curr_dim / 2), 4, 2, 1)))
                layers.append(nn.BatchNorm2d(int(curr_dim / 2)))
                layers.append(nn.ReLU())

                # Check the size of the feature space and add attention. (n + 2) is used for indexing purposes.
                if 2 ** (n + 2) in attn_feat:
                    layers.append(Self_Attn_dynamic(int(curr_dim / 2), 'relu'))
                curr_dim = int(curr_dim / 2)

            # Append a final layer to change to 3 channels and add a Tanh activation.
            layers.append(nn.ConvTranspose2d(curr_dim, 3, 4, 2, 1))
            layers.append(nn.Tanh())

            self.output = nn.Sequential(*layers)

        def forward(self, z):
            # TODO: add dynamic layers to the class for inspection. If this is done we can output p1 and p2;
            # right now they are placeholders so the training loop can stay the same.
            z = z.view(z.size(0), z.size(1), 1, 1)
            out = self.output(z)
            p1 = []
            p2 = []
            return out, p1, p2

    class Discriminator_dynamic(nn.Module):
        """Discriminator, Auxiliary Classifier."""

        def __init__(self, batch_size=64, image_size=64, conv_dim=64, attn_feat=[16, 32]):
            super(Discriminator_dynamic, self).__init__()
            self.imsize = image_size
            layers = []

            n_layers = int(np.log2(self.imsize)) - 2

            # Initialize the first layer because it is different than the others.
            layers.append(SpectralNorm(nn.Conv2d(3, conv_dim, 4, 2, 1)))
            layers.append(nn.LeakyReLU(0.1))

            curr_dim = conv_dim

            for n in range(n_layers - 1):
                layers.append(SpectralNorm(nn.Conv2d(curr_dim, curr_dim * 2, 4, 2, 1)))
                layers.append(nn.LeakyReLU(0.1))
                curr_dim *= 2
                if 2 ** (n + 2) in attn_feat:
                    layers.append(Self_Attn_dynamic(curr_dim, 'relu'))

            layers.append(nn.Conv2d(curr_dim, 1, 4))
            self.output = nn.Sequential(*layers)

        def forward(self, x):
            out = self.output(x)
            p1 = []
            p2 = []
            return out.squeeze(), p1, p2
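
As a quick sanity check (not part of the original post), a minimal sketch along the following lines should confirm that the dynamic models handle imsize 128, assuming the classes and imports above are in scope:

    # Minimal shape check for image_size=128 (illustrative only).
    import torch

    z_dim = 100
    G = Generator_dynamic(batch_size=4, image_size=128, z_dim=z_dim, conv_dim=64)
    D = Discriminator_dynamic(batch_size=4, image_size=128, conv_dim=64)

    z = torch.randn(4, z_dim)
    fake, _, _ = G(z)      # expected shape: (4, 3, 128, 128)
    score, _, _ = D(fake)  # expected shape: (4,)
    print(fake.shape, score.shape)
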
c1a1o1 commented 6 years ago

Thank you JohnnyRisk!

c1a1o1 commented 6 years ago

TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple

JohnnyRisk commented 6 years ago

Please post the whole output log.

c1a1o1 commented 6 years ago

    class ResnetGenerator(nn.Module):
        def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False,
                     n_blocks=6, padding_type='reflect'):
            assert(n_blocks >= 0)
            super(ResnetGenerator, self).__init__()
            self.input_nc = input_nc
            self.output_nc = output_nc
            self.ngf = ngf
            if type(norm_layer) == functools.partial:
                use_bias = norm_layer.func == nn.InstanceNorm2d
            else:
                use_bias = norm_layer == nn.InstanceNorm2d

            model = [nn.ReflectionPad2d(3),
                     nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0,
                               bias=use_bias),
                     norm_layer(ngf),
                     nn.ReLU(True)]

            n_downsampling = 2
            for i in range(n_downsampling):
                mult = 2 ** i
                model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3,
                                    stride=2, padding=1, bias=use_bias),
                          norm_layer(ngf * mult * 2),
                          nn.ReLU(True)]
                model += [Self_Attn(int(ngf * mult * 2), 'relu')]

            mult = 2 ** n_downsampling
            for i in range(n_blocks):
                model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer,
                                      use_dropout=use_dropout, use_bias=use_bias)]

            for i in range(n_downsampling):
                mult = 2 ** (n_downsampling - i)
                model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
                                             kernel_size=3, stride=2,
                                             padding=1, output_padding=1,
                                             bias=use_bias),
                          norm_layer(int(ngf * mult / 2)),
                          nn.ReLU(True)]
            model += [nn.ReflectionPad2d(3)]
            model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
            model += [nn.Tanh()]

            self.model = nn.Sequential(*model)

        def forward(self, input):
            return self.model(input)

c1a1o1 commented 6 years ago

File "F:\pytorchgan\attentionpytorch-CycleGAN-and-pix2pix-master\models\cycle_gan_model.py", line 84, in forward self.fake_B = self.netG_A(self.real_A) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call result = self.forward(*input, kwargs) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 112, in forward return self.module(*inputs[0], *kwargs[0]) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call result = self.forward(input, kwargs) File "F:\pytorchgan\attentionpytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 222, in forward return self.model(input) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call result = self.forward(*input, *kwargs) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\container.py", line 91, in forward input = module(input) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in call result = self.forward(input, **kwargs) File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward self.padding, self.dilation, self.groups) TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple

c1a1o1 commented 6 years ago

I added model += [Self_Attn(int(ngf * mult * 2), 'relu')]

c1a1o1 commented 6 years ago

Self_Attn here is your Self_Attn_dynamic.

JohnnyRisk commented 6 years ago

It seems you are feeding a tuple into a convolution instead of a tensor. Check the inputs to the line that raises the error, and also make sure that your Self_Attn_dynamic only outputs the out variable. In the original implementation it outputs out, attention, I believe; I edited that out because I did not need it.
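
If you want to keep the repo's original Self_Attn (which, as noted above, returns out, attention) inside an nn.Sequential like the ResnetGenerator here, one option is a small wrapper that discards the attention map instead of editing the layer itself. This is only a sketch of that idea; SelfAttnOutputOnly is a hypothetical name, not part of either repo:

    import torch.nn as nn

    class SelfAttnOutputOnly(nn.Module):
        """Wrap an attention layer so it returns a single tensor, which is
        what nn.Sequential expects from each module."""
        def __init__(self, attn_layer):
            super(SelfAttnOutputOnly, self).__init__()
            self.attn = attn_layer

        def forward(self, x):
            result = self.attn(x)
            # Original Self_Attn returns (out, attention); the edited dynamic
            # version above returns just out. Handle both cases.
            return result[0] if isinstance(result, tuple) else result

    # Hypothetical usage inside the ResnetGenerator build loop:
    # model += [SelfAttnOutputOnly(Self_Attn(int(ngf * mult * 2), 'relu'))]
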

c1a1o1 commented 6 years ago

You are right ! Thank you very much!

c1a1o1 commented 6 years ago

@JohnnyRisk I tried your code, but it seems to have no effect. I think we must use the attention parameter.

JohnnyRisk commented 6 years ago

Why do you think that? What problem are you actually seeing? Attention is used inside the model, but its output is not used elsewhere. Please look at the original training loop and code and observe what dr1, dr2, gf1, and gf2 are used for.

c1a1o1 commented 6 years ago

@JohnnyRisk After adding self-attention, the training results are the same as without self-attention. Thank you again for your help!

JohnnyRisk commented 6 years ago

I am happy to help, but perhaps you could provide some more details about the actual problem you are facing. When you say the training effect is the same, what do you mean? Do the images just not look as good? Are you experiencing mode collapse? Have you tested the self-attention to see what it is learning? It would help if you could give some concrete details of your training setup and what you believe the problem is. I am currently using this exact code and it is working just fine.
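
One concrete way to see whether the attention layers are learning anything is to watch their gamma parameter: it is initialized to zero, so attention contributes nothing at the start of training and only starts to matter once gamma moves away from zero. A rough sketch, assuming a model built from the Self_Attn_dynamic class above:

    # Print the learned gamma of every self-attention layer; if it stays near 0.0
    # after training, the attention branch is effectively unused.
    def report_attention_gamma(model):
        for name, module in model.named_modules():
            if isinstance(module, Self_Attn_dynamic):
                print('{}: gamma = {:.4f}'.format(name, module.gamma.item()))

    # e.g. call every few hundred iterations in the training loop:
    # report_attention_gamma(G)
    # report_attention_gamma(D)
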

c1a1o1 commented 6 years ago

@JohnnyRisk Sorry, it was an error in my code; I am seeing the effect now. Thank you again for your help!

prash030 commented 5 years ago

Hi @JohnnyRisk, I got the same error (no attribute 'l4'); thanks for addressing this issue. I have two questions based on your solution:

  1. I see that there are no modifications in the self-attention module except for the input arguments and the name, am I correct? Edit: I just noticed that even the arguments are the same; only the name is different in the line "super(Self_Attn.....". Does this mean that the current Self_Attn class need not be edited according to your suggestion?
  2. I have changed the generator and discriminator import statements as you suggested, but how do I change the logging?

Please let me know the answers when you find time. Thanks in advance!
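
(For reference on question 2: the dynamic generator above returns empty placeholders for the attention outputs, so any log line in the training script that consumes gf1/gf2, the names mentioned earlier in the thread, has to be removed or guarded. The snippet below is only a hedged illustration of such a guard inside the training loop, not the repo's actual logging code.)

    # Hypothetical guard around attention logging; variable names are illustrative.
    fake_images, gf1, gf2 = G(z)
    if gf1:
        print('generator attention output:', gf1)
    else:
        print('attention outputs are not tracked by the dynamic generator')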