tensorlayer / SRGAN

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
https://github.com/tensorlayer/tensorlayerx

How to just enhance an image, without increasing the resolution? #83

Open ontheway16 opened 6 years ago

ontheway16 commented 6 years ago

Hello, is there a way to modify this code to produce just an enhanced image, without increasing the actual resolution? There may be cases where the input image has good enough resolution to contain the necessary details, but the image quality is somehow deteriorated. In this case, just image restoration is needed, instead of higher resolution. Any ideas?

advaza commented 6 years ago

Joining the question! Specifically, the README example seems to do exactly that: the castle images before and after appear to be the same size. How was that image created?

Thanks a lot! Adva

zsdonghao commented 6 years ago

Hi, you can delete the last subpixel convolution, then the output size is the same as the input size.

ontheway16 commented 6 years ago

This one?

n = SubpixelConv2d(n, scale=2, n_out_channel=None, act=tf.nn.relu, name='pixelshufflerx2/2')
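For intuition, each `SubpixelConv2d` with `scale=2` doubles the spatial size, while the generator's convolutions use SAME padding and leave it unchanged. A back-of-the-envelope sketch in plain Python (the function name is illustrative, not part of the repo):

```python
def output_hw(input_hw, subpixel_scales):
    """Spatial size after a stack of subpixel (pixel-shuffle) layers.

    Assumes all other layers preserve the spatial size (SAME padding),
    which holds for the SRGAN generator's residual blocks.
    """
    h, w = input_hw
    for s in subpixel_scales:
        h, w = h * s, w * s
    return h, w

# Stock SRGAN generator: two scale-2 subpixel layers -> 4x upscaling.
print(output_hw((96, 96), [2, 2]))  # (384, 384)
# Removing only one subpixel layer still leaves 2x upscaling.
print(output_hw((96, 96), [2]))     # (192, 192)
# Only with both removed does the output match the input size.
print(output_hw((96, 96), []))      # (96, 96)
```

This is why removing a single subpixel layer alone does not yield same-size output: the stock generator contains two of them (see the last comment in this thread).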

zsdonghao commented 6 years ago

yeap

zsdonghao commented 6 years ago

I suggest you read the paper before using this code ~

rahat-yasir commented 6 years ago

After removing SubpixelConv2d, this error shows up: found_var.get_shape())) ValueError: Trying to share variable SRGAN_d/ho/dense/W, but specified shape (4608, 1) and found shape (18432, 1). Do I need to remove anything from the discriminator as well?

bluewidy commented 6 years ago

@zsdonghao I did as you said. However, the following error occurred: ValueError: Dimension 2 in both shapes must be equal, but are 256 and 64. Shapes are [1,1,256,3] and [1,1,64,3]. for 'Assign_171' (op: 'Assign') with input shapes: [1,1,256,3], [1,1,64,3]. What should I do? How should the source code be modified? I would appreciate your guidance.

ABDOELSHEMY commented 6 years ago

Same error here
ValueError: Dimension 2 in both shapes must be equal, but are 256 and 64. Shapes are [1,1,256,3] and [1,1,64,3]. for 'Assign_171' (op: 'Assign') with input shapes: [1,1,256,3], [1,1,64,3].

@zsdonghao could you give more clarification please, and save us hours of trying to solve this without any luck? Edit: I see here https://github.com/tensorlayer/srgan/issues/100 that it seems not that easy; it looks like a nightmare to make it work after all. Thanks anyway.

bluewidy commented 6 years ago

@ABDOELSHEMY You are having the same problem as me! I tried very hard, but eventually I gave up... can you help me please?

zsdonghao commented 6 years ago

Why do you have this error? @ABDOELSHEMY? Did you remove one subpixel layer and then load the pre-trained model? If yes, you can't do that; you need to re-train the model from scratch.
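The shape mismatch reported earlier in this thread is consistent with this: the discriminator ends with a flatten-then-dense stage, so its dense weight matrix has shape (C*H*W, 1), which depends on the generator's output size. A quick sanity check in plain Python (the 512-channel 6x6 feature map is an assumption chosen to match the reported shapes, not read from the code):

```python
def dense_in_features(c, h, w):
    # Flattened size of a (C, H, W) feature map feeding a dense layer.
    return c * h * w

# Hypothetical final discriminator feature map for the full-size input:
full = dense_in_features(512, 6, 6)
# Same map when the generator output is 2x smaller per side:
half = dense_in_features(512, 3, 3)

print(full, half)  # 18432 4608 -- matches the shapes in the reported error
# Halving H and W quarters the flattened size, so pretrained dense
# weights of shape (18432, 1) cannot be loaded into a (4608, 1) slot.
print(full // half)  # 4
```

This is why loading the pre-trained checkpoint after removing a subpixel layer fails, and re-training from scratch is required.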

ABDOELSHEMY commented 6 years ago

Oh, I understand now. Unfortunately, that is exactly what I did: remove one subpixel layer and simply load the pre-trained model, trying for more than 2 hours. Thank you @zsdonghao for the great clarification.

@bluewidy, as zsdonghao said, we should re-train the model after removing one subpixel layer.

suke27 commented 6 years ago

@all, can somebody show an enhanced-image result (no enlargement)? I am interested in it. Thank you!

bluewidy commented 6 years ago

@zsdonghao That was a decisive answer! Thank you! But I have a question: is it impossible to do this with a pre-trained model? Re-training the model from scratch is cumbersome and time-consuming.

zsdonghao commented 6 years ago

@bluewidy we always have to re-train it.

Alternatively, to make the output size smaller, you can try adding a DownSamplingLayer to the output layer.
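For intuition about what such a downsampling layer does, here is a minimal NumPy sketch of 2x average-pool downsampling over an image array. This is illustrative only and is not the TensorLayer API; the function name is made up:

```python
import numpy as np

def downsample_2x(x):
    """Naive 2x average-pool downsampling over an (H, W, C) array."""
    h, w, c = x.shape
    h2, w2 = h // 2, w // 2
    x = x[: h2 * 2, : w2 * 2]  # crop to an even size if needed
    # Group pixels into 2x2 blocks and average each block.
    return x.reshape(h2, 2, w2, 2, c).mean(axis=(1, 3))

img = np.random.rand(8, 8, 3)
out = downsample_2x(img)
print(out.shape)  # (4, 4, 3)
```

Appending a 2x downsample after the generator's final layer would cancel out one of the two 2x subpixel upscales, but note that this wastes computation compared to simply removing a subpixel layer and re-training.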

bluewidy commented 6 years ago

@zsdonghao Oh, really? That sounds great! I want to add a DownSamplingLayer! Where is the output layer? model.py or main.py?

bluewidy commented 6 years ago

@zsdonghao I found InputLayer in model.py. However, I can't find an OutputLayer.

zsdonghao commented 6 years ago

@bluewidy the output layer means the final layer in the model.

bluewidy commented 6 years ago

@zsdonghao Aha... Thank you! I'm a fool XD

bluewidy commented 6 years ago

@zsdonghao Well... adding the DownSamplingLayer seems very difficult... I'll just choose to re-train the model.

bluewidy commented 6 years ago

@zsdonghao I set up the main.py placeholder as follows.

t_image = tf.placeholder('float32', [batch_size, 96, 96, 3], name='t_image_input_to_SRGAN_generator')
t_target_image = tf.placeholder('float32', [batch_size, 96, 96, 3], name='t_target_image')

and I set up the utils.py as follow.

def crop_sub_imgs_fn(x, is_random=True):
    x = crop(x, wrg=96, hrg=96, is_random=is_random)
    x = x / (255. / 2.)
    x = x - 1.
    return x

def downsample_fn(x):
    # "We obtained the LR images by downsampling the HR images using
    # bicubic kernel with downsampling factor r = 4."
    x = imresize(x, size=[96, 96], interp='bicubic', mode=None)
    x = x / (255. / 2.)
    x = x - 1.
    return x

In this case, the input and output sizes are the same. This will not increase the resolution, but it should improve the quality of the image. Right?
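For context, resizing a 96x96 crop to 96x96 barely degrades it, so the network has little to learn from such pairs. A same-size restoration pair is usually built by degrading the HR crop and resizing it back, so input and target match in size but differ in detail. A minimal NumPy sketch under that assumption (nearest-neighbor resampling keeps it dependency-free; the paper uses bicubic, and the function name is illustrative):

```python
import numpy as np

def degrade_same_size(x, r=4):
    """Build a degraded input with the SAME size as the HR target:
    downsample by factor r, then upsample back (nearest neighbor)."""
    small = x[::r, ::r]                               # downsample by r
    return small.repeat(r, axis=0).repeat(r, axis=1)  # upsample back

hr = np.random.rand(96, 96, 3)   # HR target crop
lr = degrade_same_size(hr)       # degraded input, same resolution
print(hr.shape == lr.shape)      # True -- same size, but detail is lost
```

Training on (lr, hr) pairs like this targets restoration at a fixed resolution rather than super-resolution.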

ontheway16 commented 6 years ago

Following.

suke27 commented 6 years ago

@bluewidy, you enlarge from LR, then resize back to the original image size. I don't think that will improve the quality of the image. If you make any progress, please share it with us.

bluewidy commented 6 years ago

This is original LR Image.

[image: valid_lr]

and this is SRGAN Image.

[image: valid_gen]

@suke27 here I share my result. It's really awesome.

suke27 commented 6 years ago

@bluewidy, based on your result, it looks worse than the original image.

HybridDog commented 6 years ago

I've upscaled the picture with waifu2x (factor 2) and then downscaled it with SSIM-based perceptual downscaling. The result looks slightly different from the original: [image: res] Difference: [image: res]

I think it is possible to use SRGAN instead of waifu2x.

ABDOELSHEMY commented 6 years ago

@bluewidy how did you get this result? Did you add a DownSamplingLayer, or remove SubpixelConv2d and retrain the model?

bluewidy commented 6 years ago

@ABDOELSHEMY How do I add a DownSamplingLayer? Can you show me example?

ABDOELSHEMY commented 6 years ago

@bluewidy Unfortunately I did not succeed in making it work, as I am not a professional in this field. If I succeed at any time, I will let you know.

bluewidy commented 6 years ago

@ABDOELSHEMY Thank you! XD

nerdsang commented 5 years ago

I met the problem too. After removing SubpixelConv2d, this error shows up: found_var.get_shape())) ValueError: Trying to share variable SRGAN_d/ho/dense/W, but specified shape (2048, 1) and found shape (18432, 1). What should I do? @zsdonghao

PonderK commented 5 years ago

@bluewidy hello, I used your method, but it failed. Can you share your code with me? Please contact me, thank you very much.

bluewidy commented 5 years ago

Hello. I can feel your longing, but I have some unfortunate news for you. I tried a long time ago to keep the resolution from increasing, but my method eventually failed. I'm not an expert, just an ordinary person with little knowledge of computer programming. Do not use my method; it was the wrong way, and I gave up on it long ago. I have also since formatted my computer, so I no longer have any of the source code. I'm sorry to bring you this news. The person who can help you is zsdonghao. You can ask him for help. Find him.

PonderK commented 5 years ago

@bluewidy OK, thanks for your advice, I will ask him for help! Thanks again.

PonderK commented 5 years ago

@zsdonghao I have the same problem. I tried bluewidy's method above, but it didn't run, and I don't know what I did wrong. Can you give me some suggestions? Thank you very much!

huyangdi commented 4 years ago

(Quoting @zsdonghao's earlier reply:) "Why do you have this error? @ABDOELSHEMY? Did you remove one subpixel layer and load the pre-trained model? If yes, you can't do that; you need to re-train the model from scratch."

Aren't there two?