manuelfritsche / real-world-sr

[ICCVW 2019] PyTorch implementation of DSGAN and ESRGAN-FS from the paper "Frequency Separation for Real-World Super-Resolution". This code was the winning solution of the AIM challenge on Real-World Super-Resolution at ICCV 2019.
MIT License

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 256, 1, 1]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! #18

Closed Flyooofly closed 3 years ago

Flyooofly commented 4 years ago

When I run DSGAN/train.py with `python train.py` I get the error in the title. Can anyone help me solve this problem? Thank you. I have already tried the fix suggested in some blog posts, setting `inplace=False` on the ReLU/LeakyReLU layers, but the problem is not solved. This is the model code:

```python
from torch import nn
import torch


class Generator(nn.Module):
    def __init__(self, n_res_blocks=8):
        super(Generator, self).__init__()
        self.block_input = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.PReLU()
        )
        self.res_blocks = nn.ModuleList([ResidualBlock(64) for _ in range(n_res_blocks)])
        self.block_output = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, x):
        block = self.block_input(x)
        for res_block in self.res_blocks:
            block = res_block(block)
        block = self.block_output(block)
        return torch.sigmoid(block)


class Discriminator(nn.Module):
    def __init__(self, recursions=1, stride=1, kernel_size=5, gaussian=False, wgan=False, highpass=True):
        super(Discriminator, self).__init__()
        if highpass:
            self.filter = FilterHigh(recursions=recursions, stride=stride, kernel_size=kernel_size,
                                     include_pad=False, gaussian=gaussian)
        else:
            self.filter = None
        self.net = DiscriminatorBasic(n_input_channels=3)
        self.wgan = wgan

    def forward(self, x, y=None):
        if self.filter is not None:
            x = self.filter(x)
        x = self.net(x)
        if y is not None:
            x -= self.net(self.filter(y)).mean(0, keepdim=True)
        if not self.wgan:
            x = torch.sigmoid(x)
        return x


class DiscriminatorBasic(nn.Module):
    def __init__(self, n_input_channels=3):
        super(DiscriminatorBasic, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_input_channels, 64, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(128, 256, kernel_size=5, padding=2),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(256, 1, kernel_size=1)
        )

    def forward(self, x):
        return self.net(x)


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.prelu = nn.PReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        residual = self.conv1(x)
        residual = self.prelu(residual)
        residual = self.conv2(residual)
        return x + residual


class GaussianFilter(nn.Module):
    def __init__(self, kernel_size=5, stride=1, padding=4):
        super(GaussianFilter, self).__init__()
        # initialize gaussian kernel
        mean = (kernel_size - 1) / 2.0
        variance = (kernel_size / 6.0) ** 2.0
        # create a x, y coordinate grid of shape (kernel_size, kernel_size, 2)
        x_coord = torch.arange(kernel_size)
        x_grid = x_coord.repeat(kernel_size).view(kernel_size, kernel_size)
        y_grid = x_grid.t()
        xy_grid = torch.stack([x_grid, y_grid], dim=-1).float()

        # calculate the 2-dimensional gaussian kernel
        gaussian_kernel = torch.exp(-torch.sum((xy_grid - mean) ** 2., dim=-1) / (2 * variance))

        # make sure the sum of values in the gaussian kernel equals 1
        gaussian_kernel = gaussian_kernel / torch.sum(gaussian_kernel)

        # reshape to 2d depthwise convolutional weight
        gaussian_kernel = gaussian_kernel.view(1, 1, kernel_size, kernel_size)
        gaussian_kernel = gaussian_kernel.repeat(3, 1, 1, 1)

        # create gaussian filter as convolutional layer
        self.gaussian_filter = nn.Conv2d(3, 3, kernel_size, stride=stride, padding=padding, groups=3, bias=False)
        self.gaussian_filter.weight.data = gaussian_kernel
        self.gaussian_filter.weight.requires_grad = False

    def forward(self, x):
        return self.gaussian_filter(x)


class FilterLow(nn.Module):
    def __init__(self, recursions=1, kernel_size=5, stride=1, padding=True, include_pad=True, gaussian=False):
        super(FilterLow, self).__init__()
        if padding:
            pad = int((kernel_size - 1) / 2)
        else:
            pad = 0
        if gaussian:
            self.filter = GaussianFilter(kernel_size=kernel_size, stride=stride, padding=pad)
        else:
            self.filter = nn.AvgPool2d(kernel_size=kernel_size, stride=stride, padding=pad,
                                       count_include_pad=include_pad)
        self.recursions = recursions

    def forward(self, img):
        for i in range(self.recursions):
            img = self.filter(img)
        return img


class FilterHigh(nn.Module):
    def __init__(self, recursions=1, kernel_size=5, stride=1, include_pad=True, normalize=True, gaussian=False):
        super(FilterHigh, self).__init__()
        self.filter_low = FilterLow(recursions=1, kernel_size=kernel_size, stride=stride, include_pad=include_pad,
                                    gaussian=gaussian)
        self.recursions = recursions
        self.normalize = normalize

    def forward(self, img):
        if self.recursions > 1:
            for i in range(self.recursions - 1):
                img = self.filter_low(img)
        img = img - self.filter_low(img)
        if self.normalize:
            return 0.5 + img * 0.5
        else:
            return img
```

[screenshot of the full error traceback]
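For reference, the only explicitly in-place tensor operation in the model above is the `x -= ...` line in `Discriminator.forward`. A non-in-place variant looks like this (a sketch of the posted method; by itself this may or may not resolve this particular error):

```python
# Same forward as the Discriminator above, with the in-place `x -= ...`
# replaced by an out-of-place subtraction.
def forward(self, x, y=None):
    if self.filter is not None:
        x = self.filter(x)
    x = self.net(x)
    if y is not None:
        x = x - self.net(self.filter(y)).mean(0, keepdim=True)
    if not self.wgan:
        x = torch.sigmoid(x)
    return x
```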

jfun9494 commented 3 years ago

I remember solving this by using torch==1.1.0
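If it helps anyone later: this looks like the stricter autograd version check that newer PyTorch releases apply when `optimizer.step()` runs between a forward pass and its pending `backward()` (I believe it was introduced around 1.5, but take that with a grain of salt). If you do pin an old version, a quick sanity check that train.py is really running under it:

```python
# quick sanity check that the pinned build is the one actually being used
import torch

print(torch.__version__)    # e.g. 1.1.0
print(torch.version.cuda)   # CUDA version the wheel was built against
assert torch.__version__.startswith("1.1"), "not running under torch 1.1.x"
```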

hcleung3325 commented 3 years ago

> _[quotes the original post above, including the full model code]_

I have the same problem too. Can anyone help?

hcleung3325 commented 3 years ago

> I remember solving this by using torch==1.1.0

Are there any other version dependencies? What is the CUDA version?

jfun9494 commented 3 years ago

I think the CUDA version is 10.0. As long as it is compatible with the torch version, it should be fine. Torch 1.1.0 is also compatible with CUDA 9.0, I think.

hcleung3325 commented 3 years ago

> I think the CUDA version is 10.0. As long as it is compatible with the torch version, it should be fine. Torch 1.1.0 is also compatible with CUDA 9.0, I think.

Thanks. The program runs now. However, there is a problem at epoch 30: line 260 of train.py, `val_images = torch.chunk(val_images, val_images.size(0) // (n_val_images * 5))`, fails with `RuntimeError: chunk expects chunks to be greater than 0, got: 0`.
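That chunk error is unrelated to the in-place one; it just means the integer division produced 0 chunks, i.e. fewer validation images were stacked than `n_val_images * 5`. A minimal illustration with made-up sizes (not the repository's actual values):

```python
import torch

n_val_images = 5
val_images = torch.randn(10, 3, 64, 64)   # pretend only 10 validation crops were stacked

chunks = val_images.size(0) // (n_val_images * 5)
print(chunks)  # 10 // 25 == 0 -> torch.chunk(val_images, 0) raises the error above

# clamping avoids the crash (or skip the visualisation step when there are too few images)
val_chunks = torch.chunk(val_images, max(chunks, 1))
print(len(val_chunks))  # 1
```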

Flyooofly commented 3 years ago

Oh, this problem has been solved. I remember my fix was to adjust the PyTorch version: I downgraded from 1.6.0 to 1.4.0, or maybe lower; I don't remember exactly, sorry.

hcleung3325 commented 3 years ago


I am trying to train a generator with the provided code on the DIV2K dataset. However, I don't know where I need to feed the source images Z to the Discriminator. If some part of the code handles this, could you point me to it? Thank you very much.

Flyooofly commented 3 years ago

If you look at the data loader code, you can see that Z is a crop of the input HR image. During training, both the cropped input (Z) and the unsatisfied LR image (the resized input HR) come from the data loader.
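For illustration, a rough sketch of that idea as a standalone dataset (this is not the repository's actual data loader; the class name, crop size, and scale below are made up):

```python
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class SketchDSGANDataset(Dataset):
    """Sketch only: returns (x, z), where x is a bicubically downscaled HR patch
    (the 'unsatisfied' LR fed to the generator) and z is a patch cropped directly
    from the source image (the discriminator's real sample)."""

    def __init__(self, image_paths, crop_size=128, scale=4):
        self.image_paths = image_paths
        self.crop_size = crop_size
        self.crop_z = transforms.RandomCrop(crop_size)           # for z
        self.crop_hr = transforms.RandomCrop(crop_size * scale)  # for x, before downscaling
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert('RGB')
        # z: cropped directly from the source image, keeping its natural characteristics
        z = self.crop_z(img)
        # x: a larger crop, bicubically resized down by `scale`
        x = self.crop_hr(img).resize((self.crop_size, self.crop_size), Image.BICUBIC)
        return self.to_tensor(x), self.to_tensor(z)
```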


hcleung3325 commented 3 years ago

> If you look at the data loader code, you can see that Z is a crop of the input HR image. During training, both the cropped input (Z) and the unsatisfied LR image (the resized input HR) come from the data loader.

Thanks, Flyooofly. Does that mean the cropped HR image does not need to come from the same region as the generated LR image fed to the discriminator? If so, won't the discriminator have difficulty learning the mapping, since the cropped HR image and the generated LR image are not from the same region at all? Thanks again.

Flyooofly commented 3 years ago

The cropped HR and the unsatisfied LR obtained by resizing do not belong to the same domain. I think that is exactly why the generator and discriminator are needed: to move this LR from its own domain into the HR domain. I think that is the point of this GAN, to make the unsatisfied LR learn the characteristics it "should" have. Of course, this is just my understanding.


hcleung3325 commented 3 years ago

> The cropped HR and the unsatisfied LR obtained by resizing do not belong to the same domain. [...]

Thanks for the reply. Does the "unsatisfied LR" mean the generated LR?

Flyooofly commented 3 years ago

Oh, I didn't say it clearly. The unsatisfied LR is the HR image resized with bicubic downsampling.
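That is, something like this (just an illustration of bicubic downscaling with a stand-in tensor and an example scale factor, not the repository's exact call):

```python
import torch
import torch.nn.functional as F

hr = torch.rand(1, 3, 256, 256)  # stand-in HR image batch
# bicubic downscaling by 4x: this gives the "unsatisfied" LR described above
lr = F.interpolate(hr, scale_factor=0.25, mode='bicubic', align_corners=False)
print(lr.shape)  # torch.Size([1, 3, 64, 64])
```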


hcleung3325 commented 3 years ago

> Oh, I didn't say it clearly. The unsatisfied LR is the HR image resized with bicubic downsampling.

Thanks. I am a bit confused. Referring to Fig. 3 in the paper, does the "unsatisfied LR" mean x_d, and does the input (Z) mean the input z for the discriminator?

Flyooofly commented 3 years ago

Yes. Z and the generator output are the discriminator's inputs. Z is the gt image.
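For concreteness, with the Discriminator posted at the top of this issue (its forward takes an optional `y`), the call during training looks roughly like this; the tensor sizes and variable names below are illustrative only, and the model classes posted above are assumed to be in scope:

```python
import torch

generator = Generator()
discriminator = Discriminator()  # highpass=True by default

bicubic_lr = torch.rand(4, 3, 128, 128)  # the bicubically resized HR ("unsatisfied" LR)
z_crop = torch.rand(4, 3, 128, 128)      # Z: patches cropped from the gt/source images

fake_lr = generator(bicubic_lr)
# passing y subtracts the mean discriminator score of the other batch before the sigmoid
d_fake = discriminator(fake_lr, y=z_crop)
d_real = discriminator(z_crop, y=fake_lr)
```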


canornot commented 2 years ago

> Oh, this problem has been solved. I remember my fix was to adjust the PyTorch version: I downgraded from 1.6.0 to 1.4.0, or maybe lower; I don't remember exactly, sorry.

It is not necessary to reinstall an older PyTorch version. Simply placing optimizer_d.step() after g_loss.backward() and before optimizer_g.step() solves the problem. Since fake_tex is involved in computing the discriminator loss as well as g_loss, calling optimizer_d.step() in between updates discriminator parameters that the pending g_loss.backward() still needs, which triggers this error.
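For anyone hitting this later, here is a minimal sketch of that reordering. The loss terms, variable names, and tensor sizes are made up and the Generator/Discriminator classes posted above are assumed to be in scope; the only point is where the two step() calls go:

```python
import torch
from torch import nn, optim

generator = Generator()
discriminator = Discriminator()
optimizer_g = optim.Adam(generator.parameters(), lr=1e-4)
optimizer_d = optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCELoss()

input_img = torch.rand(2, 3, 64, 64)  # bicubic LR fed to the generator
real_tex = torch.rand(2, 3, 64, 64)   # Z: cropped source patches
ones = torch.ones(2, 1, 64, 64)
zeros = torch.zeros(2, 1, 64, 64)

optimizer_d.zero_grad()
optimizer_g.zero_grad()

fake_tex = generator(input_img)

# discriminator loss: the fake branch is detached, so these gradients only reach D
d_loss = bce(discriminator(real_tex), ones) + bce(discriminator(fake_tex.detach()), zeros)
d_loss.backward()

# generator (adversarial) loss, computed with the *not-yet-stepped* discriminator
# note: in this simplified sketch g_loss.backward() also adds gradients to D;
# what matters here is only where the step() calls sit
g_loss = bce(discriminator(fake_tex), ones)
g_loss.backward()

# both backward passes are done, so stepping D no longer invalidates anything
optimizer_d.step()   # moved here: after g_loss.backward(), before optimizer_g.step()
optimizer_g.step()
```

As far as I know, older PyTorch releases simply did not perform this version check, which is why downgrading also makes the error disappear.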