assafshocher / ZSSR

"Zero-Shot" Super-Resolution using Deep Internal Learning

Unable to run 8-bit grayscale PNG images #15

Open IamGroot19 opened 4 years ago

IamGroot19 commented 4 years ago

Hi, the code works fine with all 24-bit images (both JPG and PNG), but when I tried to run it on 8-bit images (including the boat image in Set14, i.e. 'img_003_SRF_2_LR.png'), I am getting some errors.

    run run_ZSSR.py
    C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:72: UserWarning: h5py is running against HDF5 1.10.2 when it was built against 1.10.3, this may cause problems
      '{0}.{1}.{2}'.format(*version.hdf5_built_version_tuple)
    no kernel loaded
    ['C:\Users\bhara\Downloads\socher_original\ZSSR-master/test_data\img_003_SRF_2_LR_0.mat;']
    *** 0
    WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\tf_should_use.py:193: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use tf.global_variables_initializer instead.
    Start training for sf= [2.0, 2.0]
    **
    Traceback (most recent call last):
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\run_ZSSR.py", line 75, in <module>
        main(conf_str, gpu_str)
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\run_ZSSR.py", line 69, in main
        run_ZSSR_single_input.main(input_file, ground_truth_file, kernel_files_str, gpu, conf_name, res_dir)
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\run_ZSSR_single_input.py", line 25, in main
        net.run()
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\ZSSR.py", line 107, in run
        self.train()
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\ZSSR.py", line 311, in train
        self.train_output = self.forward_backward_pass(self.lr_son, self.hr_father)
      File "C:\Users\bhara\Downloads\socher_original\ZSSR-master\ZSSR.py", line 222, in forward_backward_pass
        feed_dict)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
        run_metadata_ptr)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
        feed_dict_tensor, options, run_metadata)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
        run_metadata)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
        raise type(e)(node_def, op, message)
    InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: transpose expects a vector of size 3. But input(1) is a vector of size 4
        [[{{node layer_1-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
        [[add/_15]]
      (1) Invalid argument: transpose expects a vector of size 3. But input(1) is a vector of size 4
        [[{{node layer_1-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
    0 successful operations. 0 derived errors ignored.

Just to be sure, I also ran it from the command line and got the same error messages. Any ideas?

Thanks.

assafshocher commented 4 years ago

Hi, thanks for commenting.

Supporting a grayscale image means slightly modifying the network architecture. This can be done in the config file, line 55: just change the first conv layer to take 1 input channel, i.e. [3, 3, 1, self.width]. (Note that the image still needs a channels axis, but it will be a singleton.)

Another solution is to concatenate the image to itself channel-wise, so that you get an RGB image that looks the same as your grayscale one, before feeding it to ZSSR.
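A minimal sketch of this channel-stacking option (my own example, not code from this repo; the use of imageio and the file names are assumptions):

    # Stack a grayscale image into 3 identical channels so ZSSR sees an RGB input.
    # Pure duplication, no resampling, so it should not introduce any blur.
    # (Hedged sketch; 'input_gray.png' / 'input_gray_as_rgb.png' are placeholder names.)
    import numpy as np
    import imageio

    gray = imageio.imread('input_gray.png')          # shape (M, N), 8-bit grayscale
    rgb = np.repeat(gray[:, :, None], 3, axis=2)     # shape (M, N, 3), identical channels
    imageio.imwrite('input_gray_as_rgb.png', rgb)

Since all three channels are identical, the super-resolved output can be collapsed back to a single channel afterwards if needed.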

In the future I will add automatic support for identifying grayscale images.

Thanks!

IamGroot19 commented 4 years ago

Hi, thanks for the quick response.

I tried both of your solutions.

  1. I am unable to save a grayscale image (say MxN) with an explicit channels axis (MxNx1) using the PIL library. I am able to save it using the cv2 library, but in that case I still get the same error while running the ZSSR code, even after the modification in line 55 of configs.py. (The exact code is:

         self.filter_shape = ([[3, 3, 1, self.width]] +
                              [[3, 3, self.width, self.width]] * (self.depth - 2) +
                              [[3, 3, self.width, 3]])

     )

  2. I tried concatenating the same image along axis=2 three times to get a 3-channel version of the grayscale image. This approach made the image look blurrier, and I don't want to introduce any unwanted changes to the image.

Any ideas? Thanks

IamGroot19 commented 4 years ago

Another query: although I am currently facing trouble running the code on any grayscale image, my end objective is to use the model on small images (for example, to obtain 100x100 images from 25x25 images). Is there anything specific I must take care of in order to run the code on such small images?

assafshocher commented 4 years ago
  1. Try adding a singleton dim to the image read: at ZSSR.py line 66, change it to something like img.imread(input_img)[:, :, None]. You can also add a condition so the singleton is added only when needed (bonus: also modify the conf so that the architecture fits); see the sketch after this list. If that works you can contribute it.

  2. That is odd, because the transformation from grayscale to RGB is done exactly like this and should not produce any blurring effects.

  3. 25x25 can be a bit extreme. This is internal learning, so the data size is the size of the image. We have seen stable results at 40x40 for x2 SR. Going more extreme can work, depending on how texture-like your input image is (more formally: how large the entropy of the patch distribution is). But I have no guarantees there, especially for x4. One tip is to use the gradual mode (which takes more time); it seems that it can be significant in your case.
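A minimal sketch of the conditional read suggested in point 1 (a hedged example, not the repository's exact code; the matplotlib import and the helper name are my own assumptions):

    # Read an input image and ensure it has a channels axis, adding a singleton
    # channel only when the image is grayscale (sketch under the assumptions above).
    import matplotlib.image as img

    def read_with_channels_axis(input_img_path):
        im = img.imread(input_img_path)
        if im.ndim == 2:              # grayscale image, shape (M, N)
            im = im[:, :, None]       # add singleton channel, shape (M, N, 1)
        return im

If the singleton is added this way, the first conv layer in the config still has to expect 1 input channel, as in the [3, 3, 1, self.width] change above.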