yu4u / noise2noise

An unofficial and partial Keras implementation of "Noise2Noise: Learning Image Restoration without Clean Data"
MIT License

Small number of training images #11

Open Shakarim94 opened 6 years ago

Shakarim94 commented 6 years ago

Please correct me if I'm wrong. I can see that the training dataset consists of only 291 images. Isn't that extremely small? The original paper uses 50k images of the size 256x256. I don't think that 291 images are enough to achieve the desirable result.

P.S. how much time does it take to train the network for gaussian noise?

yu4u commented 6 years ago

Isn't that extremely small?

It depends. For example, the number of images required for a super-resolution task is relatively small. Denoising, the task noise2noise solves, is similar to super resolution, so I think the number of training images required is not so large. That said, it is better to increase the training data if performance matters. You can simply set --image_dir to a directory containing a large number of images.
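If you do point --image_dir at a bigger dataset, it helps to check what the directory actually contains first. This is a hypothetical helper, not the repository's actual loader; the extension list is an assumption:

```python
from pathlib import Path

# Hypothetical helper (not the repository's actual code): collect image
# paths from a directory, as a sanity check before passing it via --image_dir.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp"}

def collect_image_paths(image_dir):
    """Return sorted image file paths found directly under image_dir."""
    root = Path(image_dir)
    return sorted(p for p in root.iterdir()
                  if p.is_file() and p.suffix.lower() in IMAGE_EXTS)
```

Then training on the larger set would look something like `python train.py --image_dir /path/to/large_dataset` (other flags as in the README).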

P.S. how much time does it take to train the network for gaussian noise?

About six hours for 60-epoch training on a GTX 1080.

shivamsaboo17 commented 6 years ago

Is that the time taken to train on just 291 images, or did you train with a larger dataset? Also, can you tell me the RAM of your machine? Thanks in advance.

yu4u commented 6 years ago

Is that the time taken to train on just 291 images, or did you train with a larger dataset?

All of the results in this repository come from the 291 images.

Also, can you tell me the RAM of your machine? Thanks in advance.

32GB. The training script does not require a large amount of memory.

akkshita commented 5 years ago

How can I get that dataset of 291 images?

yu4u commented 5 years ago

Please refer to the README.