Closed djdjoko closed 8 years ago
I think the resolution of the images is not so important. For example, models/ukbench
is trained on the ukbench dataset, which contains 640x480 images. In my benchmark, the ukbench model's performance is very close to the photo model's performance.
If you are trying to train a denoising model, be careful with the JPEG settings of your dataset: chroma subsampling 4:4:4
and quality > 95
may be required.
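Not part of the thread, but a quick way to check whether a dataset image really uses 4:4:4 is to read the sampling factors out of the JPEG's SOF marker. Below is a minimal stdlib-only sketch (the function names are my own, and it assumes a well-formed JPEG whose SOF marker appears before the entropy-coded data, as in normal files):

```python
def jpeg_sampling_factors(data: bytes):
    """Return per-component (H, V) sampling factors from the first SOF
    marker of a JPEG byte stream, or None if no SOF marker is found."""
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        if marker == 0xFF:  # fill byte, not a real marker
            i += 1
            continue
        # SOF0..SOF15, excluding DHT (C4), JPG (C8) and DAC (CC)
        if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
            ncomp = data[i + 9]  # number of color components
            factors = []
            for c in range(ncomp):
                # one sampling byte per component: high nibble H, low nibble V
                samp = data[i + 11 + 3 * c]
                factors.append((samp >> 4, samp & 0x0F))
            return factors
        # every other header segment carries a 2-byte big-endian length
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seglen
    return None

def is_444(data: bytes) -> bool:
    """True if every component uses 1x1 sampling, i.e. no chroma subsampling."""
    factors = jpeg_sampling_factors(data)
    return factors is not None and all(f == (1, 1) for f in factors)
```

Usage would be `is_444(open("image.jpg", "rb").read())`; 4:2:0 files show a (2, 2) factor on the luma component instead.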
Thanks! How do you measure model performance?
If I understand correctly, the feature set is more important than the image size. Let's say we train a model_portrait on portraits (5000 images), a model_nature on nature (5000 images), and a model_portrait_nature on the superset of 10000 images. Would model_portrait_nature have the same, better, or worse performance than the individual models?
Also, would you expect e.g. model_portrait, which is specifically trained on portraits, to be significantly better at upscaling portraits than ukbench?
I used tools/benchmark.lua
for benchmarking. It reports PSNR and RMSE; the baseline
is bicubic interpolation.
$ th tools/benchmark.lua -model1_dir models/ukbench -model2_dir models/photo -method scale -dir /path/to/benchmark/dataset
99/99; baseline_rmse=13.559323, model1_rmse=9.934793, model2_rmse=9.884656, baseline_psnr=26.851980, model1_psnr=30.009167, model2_psnr=30.056692
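For 8-bit images the two metrics in that output are related by PSNR = 20 · log10(255 / RMSE), although the aggregate numbers in the log need not convert exactly into each other if they are averaged per image. A minimal sketch of both metrics (helper names are my own; pixels are flat 0-255 sequences):

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length pixel sequences."""
    assert len(a) == len(b) and len(a) > 0
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better,
    infinite for identical inputs."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)
```

With these definitions, a lower RMSE always means a higher PSNR, so the two columns in the benchmark output rank the models the same way.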
At the very least, models/anime_style_art_rgb
produces obviously different results from models/photo
.
I am trying to train a model to get results similar to the new photo model you implemented. I read in previous posts that you trained it on 5000 high-resolution images. Would you mind sharing which dataset you used and what the average size of the input images was? Would you expect any significant improvement from a larger dataset with higher-resolution images?