yu4u / noise2noise

An unofficial and partial Keras implementation of "Noise2Noise: Learning Image Restoration without Clean Data"
MIT License

Hi! Can any kind of picture fit your net? #38

Open wqz960 opened 5 years ago

wqz960 commented 5 years ago

Thank you for your wonderful code! I want to test it on another dataset with about 3000+ pictures. Do I need to do any data augmentation? I have also resized all the pictures to 450*450 pixels. Is that suitable?

yu4u commented 5 years ago

Try it! The models in this repo were basically trained using images with "ideal" noise, so they might not work well on real noise.

wqz960 commented 5 years ago

@yu4u I used your training commands:

    python3 train.py --image_dir dataset/291 --test_dir dataset/Set14 --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model text,0,50 --target_noise_model text,0,50 --val_noise_model text,25,25 --loss mae --output_path text_noise
    python3 train.py --image_dir dataset/291 --test_dir dataset/Set14 --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model impulse,0,95 --target_noise_model impulse,0,95 --val_noise_model impulse,70,70 --loss l0 --output_path impulse_noise

and added --model unet for training. But when I test the results, why are they all corrupted by white Gaussian noise? Looking at the result image, the middle image is noisy with white Gaussian noise. Why?

yu4u commented 5 years ago

Please refer to the README:

python3 test_model.py --weight_file [trained_model_path] --image_dir dataset/Set14

optional arguments:
  -h, --help            show this help message and exit
  --image_dir IMAGE_DIR
                        test image dir (default: None)
  --model MODEL         model architecture ('srresnet' or 'unet') (default:
                        srresnet)
  --weight_file WEIGHT_FILE
                        trained weight file (default: None)
  --test_noise_model TEST_NOISE_MODEL
                        noise model for test images (default: gaussian,25,25)
  --output_dir OUTPUT_DIR
                        if set, save resulting images otherwise show result
                        using imshow (default: None)
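
To test with the noise model used at training/validation time, pass --test_noise_model explicitly; otherwise the default gaussian,25,25 is applied, which would explain the Gaussian-looking middle image. A minimal example for the impulse model above (the weight file path is a placeholder):

    python3 test_model.py --weight_file [trained_model_path] --image_dir dataset/Set14 --model unet --test_noise_model impulse,70,70
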
wqz960 commented 5 years ago

@yu4u I am new to denoising and image reconstruction. Is this function for calculating PSNR right?

    import numpy as np

    def cal_PSNR(pred_img, gt_img):
        return 10.0 * np.log10((255.0 ** 2) / np.mean(np.square(pred_img - gt_img)))

Can you help me check this? I tested the model with random-impulse noise, and I do not think the denoised images are good, but the PSNR calculated by this function is almost 25. Is there some problem? Thank you!

yu4u commented 5 years ago

PSNR = 25 indicates poor image quality, as you felt.
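
One pitfall worth ruling out (not confirmed as the cause here): if the images are loaded as uint8 arrays, the subtraction wraps around before squaring and distorts the PSNR. A minimal sketch of a variant that casts to float first:

    import numpy as np

    def cal_psnr_float(pred_img, gt_img):
        # cast to float64 so uint8 inputs cannot wrap around on subtraction
        diff = pred_img.astype(np.float64) - gt_img.astype(np.float64)
        return 10.0 * np.log10((255.0 ** 2) / np.mean(np.square(diff)))
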

wqz960 commented 5 years ago

Hi @yu4u! Sorry for interrupting you again, but I have a question about evaluation. The validation PSNR can be read from the saved .hdf5 file name. I used a PSNR formula from the Internet to calculate the PSNR again, but the two values are not the same. Why? The formula from the Internet:

    def cal_PSNR(pre, gt):
        return 10.0 * np.log10((255.0 ** 2) / np.mean(np.square(pre - gt)))

yu4u commented 5 years ago

The two implementations are the same. ... Is this not what you meant?

    import numpy as np
    import tensorflow as tf

    # cal_PSNR is the NumPy formula above; PSNR is the Keras/TF metric used during training
    a = np.random.uniform(0, 255, (100, 100, 3))
    b = np.random.uniform(0, 255, (100, 100, 3))
    print(cal_PSNR(a, b))

    # evaluate the TF metric on the same arrays (TF 1.x session API)
    sess = tf.Session()
    r = PSNR(tf.constant(a), tf.constant(b))
    with sess.as_default():
        print(r.eval())
wqz960 commented 5 years ago

@yu4u After training, I can see the value in the filename, e.g. val_PSNR is 25. But when I test the validation images again, the PSNR calculated by the formula above is not the same as yours. I do not know why, so that is my question... hoping for your kind response.

yu4u commented 5 years ago

How different? If the difference is not so large, it would be caused by the randomness of the noisy images. The noisy images are created online without fixing the random seed (yes, this should be fixed for reproducibility).
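
As a rough illustration of the reproducibility point (not the repo's actual code), fixing the NumPy seed before the online noise generation would make the noisy validation images, and hence the reported PSNR, repeatable across runs:

    import numpy as np

    np.random.seed(0)  # fix the seed so the same noise is drawn on every run
    noise = np.random.randn(128, 128, 3) * 25.0  # hypothetical: Gaussian noise at sigma=25
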

wqz960 commented 5 years ago

The difference is big. I used your code on my own dataset and the performance is good! The only thing that confuses me is that the val_PSNR and the PSNR I tested later are not the same: the val_PSNR is almost 27, but the test PSNR using the formula above is almost 32. I am sure the val folder and the test folder are the same... For comparison, I cropped all my images to 128x128x3. The random seed? The validation generator is evaluated in order, is it related to randomness? @yu4u

yu4u commented 5 years ago

Hmm, something is wrong. As you indicated, the PSNR calculated in test_model.py is higher than that in the training log...

wqz960 commented 5 years ago

Yes! I have debugged Val_inputimg and the input seems correct, but as for the Keras API, I don't know how to fix it. Can you help me with this? I need the validation log to show something. Thank you!