yu4u / noise2noise

An unofficial and partial Keras implementation of "Noise2Noise: Learning Image Restoration without Clean Data"
MIT License

Ask about image_size #30

Open jjccyy opened 5 years ago

jjccyy commented 5 years ago

Hi, I have a question: what is image_size? My understanding is that it is the length and width of the picture. When I train on my own pictures I get an error, but I do not get errors with the datasets you provide. The papers I read use square pictures, while I am using rectangular images whose length and width are unequal; my biggest picture is 2338x1653. I want to know whether the image size has an impact on training. Thanks a lot.

yu4u commented 5 years ago

args.image_size is the image patch size used in training. Patches are created by randomly cropping the training images.
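The random-crop step can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code; the function name and the NumPy-based approach are assumptions:

```python
import numpy as np

def random_patch(image, patch_size):
    """Randomly crop a square patch of side `patch_size` from an H x W x C image."""
    h, w = image.shape[:2]
    if h < patch_size or w < patch_size:
        raise ValueError("image is smaller than the patch size")
    i = np.random.randint(0, h - patch_size + 1)
    j = np.random.randint(0, w - patch_size + 1)
    return image[i:i + patch_size, j:j + patch_size]

# Rectangular images work too, as long as both sides are >= patch_size:
img = np.zeros((1653, 2338, 3), dtype=np.uint8)  # e.g. a 2338x1653 picture
patch = random_patch(img, 128)
print(patch.shape)  # (128, 128, 3)
```

Because patches are cropped at random offsets, rectangular source images are fine; only images smaller than the patch size cause a problem.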

jjccyy commented 5 years ago

Thank you. I still have a question: does training have requirements on the image format? I found that png seems to raise an error.

yu4u commented 5 years ago

The latest version works for ".jpeg", ".jpg", ".png", ".bmp" images.

https://github.com/yu4u/noise2noise/blob/master/generator.py#L10
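The extension check at the line linked above can be sketched like this. A simplified illustration of the idea, not the exact code from generator.py:

```python
from pathlib import Path

# Formats accepted by the latest version, per the comment above.
SUPPORTED = (".jpeg", ".jpg", ".png", ".bmp")

def list_images(image_dir):
    """Collect all files in image_dir whose suffix is a supported image format."""
    return sorted(p for p in Path(image_dir).iterdir()
                  if p.suffix.lower() in SUPPORTED)
```

Lower-casing the suffix means files such as `photo.JPG` are also picked up.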

AllenJac commented 5 years ago

How many training patches (samples) are used in your case? Usually, if the patch size and stride are fixed, the total number of training patches can be calculated. Instead, here we need to set the number of iterations. Is there any recommendation for the number of iterations?

yu4u commented 5 years ago

Training patches are randomly generated by first selecting an image and then randomly cropping a patch from it. Thus we have to set the number of iterations, and it can be an arbitrary number (but a small number of iterations increases the validation overhead).

AllenJac commented 5 years ago

Yes, I understand this process. If the number of iterations is 1000, then there are 1000 training patches after one epoch, which is fewer than with a fixed scheme (fixed patch size and stride, sliding the patch over the image). How can you make sure that fewer training patches still achieve a very good result? So, is there any recommendation for the number of iterations, such as a range?

yu4u commented 5 years ago

There is no point in considering the number of iterations alone; the accuracy depends on the number of iterations x the number of epochs. I recommend choosing the number of iterations so that the processing time for validation is less than 5% of the training time.

I'm very curious, but not sure, how much difference there is between the fixed way and the random way.

AllenJac commented 5 years ago

So what are the number of iterations and the number of epochs in your example for Gaussian noise?

yu4u commented 5 years ago

Default settings were used; all the command lines for training are described in the README.

AllenJac commented 5 years ago

I read it; most parameter settings are in it, but there is no number of iterations or number of epochs.

yu4u commented 5 years ago

You can see default settings by simply executing:

python train.py -h

AllenJac commented 5 years ago

I saw it. The accuracy depends on the number of iterations x the number of epochs. In your example, the number of iterations x the number of epochs is much smaller than the number of trainable parameters. So how can you make sure that so few training patches still achieve a very good result?

yu4u commented 5 years ago

I did not understand what you meant. You can check val_loss or val_PSNR to see whether the model has converged or not.

AllenJac commented 5 years ago

OK, so the number of iterations x the number of epochs is the total number of your training patches, right? Usually, the number of training patches is larger than the number of trainable parameters. But in your case, the number of training patches (samples) is much smaller than the number of trainable parameters, so why does it still work well?

yu4u commented 5 years ago

> OK, so the number of iterations x the number of epochs is the total number of your training patches, right?

No, it is the number of iterations x the number of epochs x the batch size.

> Usually, the number of training patches is larger than the number of trainable parameters.

I do not think so. It highly depends on the task, the model, the image size, and so on.
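In other words, the total number of patches seen during training works out as follows. The specific values below are illustrative, not the repository defaults:

```python
# Hypothetical settings, chosen only to illustrate the arithmetic.
steps_per_epoch = 1000   # the "number of iterations" per epoch
epochs = 60
batch_size = 16

# Each training step draws one batch of randomly cropped patches,
# so the total number of patches seen is the product of all three.
total_patches_seen = steps_per_epoch * epochs * batch_size
print(total_patches_seen)  # 960000
```

Since patches are cropped at random positions each time, these 960,000 samples are (almost all) distinct, unlike a fixed sliding-window scheme where the patch set is enumerated once.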

jjccyy commented 5 years ago

@yu4u Regarding watermark removal, I would like to ask a question. I am using the text-removal loss. I found that if a mask is used and the watermark fully covers the background, the watermark can be removed, but if the watermark is translucent it does not work. Why is that?

yu4u commented 5 years ago

Please create a new issue for a different topic, and close this issue once it is resolved.