nagadomi / waifu2x

Image Super-Resolution for Anime-Style Art
http://waifu2x.udp.jp/
MIT License

Does model training work in an additive manner? #355

Open CarbonPool opened 3 years ago

CarbonPool commented 3 years ago

For the same model, suppose some additional images are added, a new image list is generated, and then image data is generated for training. Does the model remember the image data from the previous training?

nagadomi commented 3 years ago

I don't fully understand the meaning of your question. You can load trained model parameters with the -resume <model_file.t7> option. Related to: https://github.com/nagadomi/waifu2x/issues/209

I also use this option for training each noise level. In noise scale 0-3 training, ${MODEL_DIR}/scale2.0x_model.t7 is loaded as the initial parameters with the -resume option. https://github.com/nagadomi/waifu2x/blob/d5171bcba9b8eebe295b023747c99b20d6c24e85/appendix/train_cunet_art.sh#L10-L37
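
A minimal sketch of that incremental workflow, assuming the standard waifu2x training layout (the model directory and noise level here are illustrative, matching the settings used later in this thread):

    # regenerate the packed training data after updating data/image_list.txt
    th convert_data.lua
    # continue training from the previously saved parameters
    th train.lua -model_dir models/my_model -method noise -noise_level 2 \
      -resume models/my_model/noise2_model.t7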

CarbonPool commented 3 years ago

Can you explain the meaning of "save_history" to me? I haven't seen any detailed parameter documentation for train.lua. If the "save_history" parameter is not used, does the learning rate start over when I train this model again?

nagadomi commented 3 years ago

When -save_history 1 is specified, every model that improves the best validation score is saved under a different filename. When -save_history 0 is specified (the default), only the model file with the highest validation score is saved.
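
For example, a hypothetical run that keeps every best-so-far checkpoint instead of overwriting a single model file (the directory and other options are illustrative):

    # save each new best-validation-score model under its own filename
    th train.lua -model_dir models/my_model -method noise -noise_level 2 -save_history 1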

learning rate

The learning rate starts from the default value (0.00025) unless you specify the -learning_rate option. If you want to fine-tune, specify a smaller value.
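
A possible fine-tuning invocation along those lines (the value 0.00005 is only illustrative; it is not suggested anywhere in this thread):

    # resume from the existing model and start below the 0.00025 default
    th train.lua -model_dir models/my_model -method noise -noise_level 2 \
      -resume models/my_model/noise2_model.t7 -learning_rate 0.00005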

CarbonPool commented 3 years ago

This is the output of each round of training:

[======================================= 134528/134528 ===============================>] Tot: 19m2s | Step: 8ms { loss : 0.00035312999208403 MSE : 0.00071552308758262 PSNR : 32.32026436039 }

validation

[======================================= 21120/21120 =================================>] Tot: 1m37s | Step: 4ms

Does the "best" field represent the latest learning rate?

nagadomi commented 3 years ago

No. It is the lowest MSE (mean squared error). The learning rate is displayed in the console output for each round:

# 2
learning rate: 0.00024853029126955

CarbonPool commented 3 years ago

I didn't see the "learning rate" field; this is the output from the beginning of the script:

th train.lua -model_dir models/my_model -method noise -noise_level 2 -learning_rate 0.00029588596682772 -test images/3688-gigapixel-scale-1_50x.tif

{ grayscale : false   thread : -1   name : "user"   loss : "huber"
  random_erasing_rect_min : 8   use_transparent_png : false   model : "vgg_7"
  random_erasing_rate : 0   random_pairwise_negate_x_rate : 0   resume_epoch : 1
  downsampling_filters : { 1 : "Box" 2 : "Lanczos" 3 : "Sinc" }   resume : ""
  crop_size : 48   random_pairwise_rotate_min : -6   random_blur_size : "3,5"
  random_pairwise_scale_max : 1.176   random_blur_rate : 0   random_pairwise_negate_rate : 0
  nr_rate : 0.65   oracle_drop_rate : 0.5   inner_epoch : 4   invert_x : false
  epoch : 50   update_criterion : "mse"   pairwise_flip : true
  jpeg_chroma_subsampling_rate : 0.5   image_list : "./data/image_list.txt"
  oracle_rate : 0.1   active_cropping_tries : 10   backend : "cunn"
  random_pairwise_scale_min : 0.85   active_cropping_rate : 0.5   batch_size : 16
  random_unsharp_mask_rate : 0   max_size : 256   validation_crops : 200   plot : false
  random_erasing_rect_max : 32   resize_blur_max : 1.05   gpu : { 1 : 1 }
  random_pairwise_scale_rate : 0   random_pairwise_rotate_rate : 0   random_color_noise_rate : 0
  validation_filename_split : false   images : "./data/images.t7"
  model_file : "models/my_model/noise2_model.t7"   resize_blur_min : 0.95
  padding_y_zero : false   test : "images/3688-gigapixel-scale-1_50x.tif"
  learning_rate_decay : 3e-07   method : "noise"   save_history : false   color : "rgb"
  seed : 11   pairwise_y_binary : false   model_dir : "models/my_model"   style : "art"
  data_dir : "./data"   noise_level : 2   random_half_rate : 0   validation_rate : 0.05
  random_overlay_rate : 0   max_training_image_size : -1   padding : 0   padding_x_zero : false
  scale : 2   random_pairwise_rotate_max : 6   learning_rate : 0.00029588596682772
  random_blur_sigma_max : 1   patches : 64   random_erasing_n : 1   random_blur_sigma_min : 0.5 }

0 small images are removed

make validation-set

load .. 2102
[============================= 100/110 =============================>........] ETA: 1s81ms | Step: 108ms

1

resampling

[======================================= 2102/2102 ===================================>] Tot: 48s274ms | Step: 23ms

update

[======================================= 75969/134528 .................................] ETA: 8m29s | Step: 8ms
[======================================= 134528/134528 ===============================>] Tot: 19m2s | Step: 8ms { loss : 0.00035312999208403 MSE : 0.00071552308758262 PSNR : 32.32026436039 }

validation

[======================================= 21120/21120 =================================>] Tot: 1m37s | Step: 4ms

CarbonPool commented 3 years ago

Sorry, I think I misunderstood. I thought each "validation" represented a round; it seems the next round is actually a long way off.

nagadomi commented 3 years ago

It is shown after # 2. -inner_epoch is 4 by default, so one round consists of 4 update+validation passes.
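
So with the defaults, the "# N" header and its learning rate line appear only once per 4 update+validation passes. A hypothetical way to see them more often would be to shorten the round (this invocation is illustrative, not from the thread):

    # print the round header after every update+validation pass
    th train.lua -model_dir models/my_model -method noise -noise_level 2 -inner_epoch 1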

CarbonPool commented 3 years ago

Thank you very much for your warm help; I have learned a lot.