YeHuanjie opened 4 years ago
I also have this problem. How did you solve it?
This is a widespread bug in GAN-like networks: checkerboard artifacts. When upsampling, you should not use deconvolution (transposed convolution) or similar layers. Use resize-convolution instead: an upsampling/interpolation layer followed by a regular convolution. See this article: https://distill.pub/2016/deconv-checkerboard/
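To see why transposed convolution produces these artifacts, here is a small sketch: a stride-2 transposed conv with an all-ones kernel applied to a constant input gives a periodic output, because output pixels are covered by an uneven number of kernel positions (this is the overlap effect the Distill article describes):

```python
import torch
import torch.nn as nn

# Stride-2 transposed conv, 3x3 all-ones kernel, constant input.
# Uneven kernel overlap makes the output periodic (checkerboard-like):
# some output pixels receive 1 contribution, others 2 or 4.
deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, bias=False)
nn.init.ones_(deconv.weight)

x = torch.ones(1, 1, 4, 4)
with torch.no_grad():
    y = deconv(x)

print(y[0, 0])  # values vary periodically even though the input is constant
```

Corner pixels get a single kernel contribution while interior pixels get up to four, so the output alternates between 1, 2, and 4 even for a flat input. A resize-convolution (upsample + conv) does not have this uneven-overlap problem.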
Some code for beginners:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(UpBlock, self).__init__()
        # Replace the transposed convolution with interpolation followed by a
        # regular convolution (resize-convolution). mode='nearest' also works.
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Conv2d(in_channels, in_channels // 2, kernel_size=3, padding=1)
        # ConvBlock is your existing conv block. Adjust for the concatenated
        # channels (assumes the skip connection carries in_channels // 2 channels).
        self.conv_block = ConvBlock(in_channels // 2 + in_channels // 2, out_channels)

    def forward(self, x, skip):
        x = self.upsample(x)
        x = self.conv(x)
        x = torch.cat([x, skip], dim=1)
        return self.conv_block(x)
```
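Putting it together, here is a self-contained sketch you can run to check the shapes. `ConvBlock` below is a placeholder I made up (two conv + ReLU layers); substitute your repo's own block, since only the channel counts matter:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Placeholder conv block (conv + ReLU, twice); your real one may
    # include batch norm etc. Only in/out channel counts matter here.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class UpBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Resize-convolution: interpolate, then a regular convolution.
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Conv2d(in_channels, in_channels // 2, kernel_size=3, padding=1)
        self.conv_block = ConvBlock(in_channels // 2 + in_channels // 2, out_channels)

    def forward(self, x, skip):
        x = self.upsample(x)          # (N, C, H, W) -> (N, C, 2H, 2W)
        x = self.conv(x)              # (N, C, 2H, 2W) -> (N, C//2, 2H, 2W)
        x = torch.cat([x, skip], dim=1)
        return self.conv_block(x)

# Shape check: decoder feature (1, 64, 16, 16) + skip (1, 32, 32, 32).
up = UpBlock(64, 32)
x = torch.randn(1, 64, 16, 16)
skip = torch.randn(1, 32, 32, 32)
out = up(x, skip)
print(out.shape)  # torch.Size([1, 32, 32, 32])
```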
My test images have artifacts like this. What's the problem and how do I solve it? Thanks anyway! ![1](https://user-images.githubusercontent.com/41501835/82526380-7596ad80-9b66-11ea-99f3-d5120b1d057a.png)