Open Shakarim94 opened 6 years ago
> Isn't that extremely small?
It depends. For example, the number of images required for a super-resolution task is relatively small. Denoising, the task noise2noise solves, is similar to super resolution, so I think the number of images required for training is not so large.
Anyway, it is better to increase the training data if performance matters. You can simply point --image_dir at a directory containing a large number of images.
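For example, a hypothetical invocation might look like the following; only the --image_dir flag is taken from the comment above, while the script name and dataset path are placeholders:

```shell
# Sketch only: train.py and the dataset path are placeholders;
# check the repository's README for the actual entry point and flags.
python train.py --image_dir /path/to/large_image_dir
```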
> P.S. how much time does it take to train the network for gaussian noise?
About six hours for 60 epochs of training on a GTX 1080.
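For context, noise2noise-style Gaussian training fits the network on pairs of independently corrupted copies of the same clean image, using one noisy copy as the input and the other as the target. A minimal sketch of generating such a pair with NumPy (the noise-level range here is an arbitrary illustration, not the repository's exact setting):

```python
import numpy as np

def noisy_pair(clean, std_range=(0.0, 50.0), rng=None):
    """Corrupt one clean image twice with independent Gaussian noise.

    The std range is an arbitrary choice for illustration; the actual
    training script may sample noise differently.
    """
    rng = np.random.default_rng() if rng is None else rng
    src = clean + rng.normal(0.0, rng.uniform(*std_range), clean.shape)
    tgt = clean + rng.normal(0.0, rng.uniform(*std_range), clean.shape)
    # Keep pixel values in the valid 8-bit range.
    return np.clip(src, 0, 255), np.clip(tgt, 0, 255)

clean = np.full((64, 64, 3), 128.0)  # dummy mid-gray image
src, tgt = noisy_pair(clean)
```

Because the two corruptions are drawn independently, the network cannot simply copy its input and is pushed toward predicting the expected (clean) value.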
Is the time taken to train just the 291 images, or did you train it with a larger dataset? Also, can you tell me the RAM of your machine? Thanks in advance.
> Is the time taken to train just 291 images or did you train it with a larger dataset?
All of the results in this repository come from the 291 images.
> Also, can you tell me the RAM of your machine?
32GB. The training script does not require a large amount of memory.
How can I get that dataset of 291 images?
Please refer to the README.
Please correct me if I'm wrong. I can see that the training dataset consists of only 291 images. Isn't that extremely small? The original paper uses 50k images of size 256×256. I don't think that 291 images are enough to achieve the desired result.
P.S. how much time does it take to train the network for gaussian noise?