caiyuanhao1998 / PNGAN

"Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training" (NeurIPS 2021)
https://arxiv.org/abs/2204.02844
MIT License

Log Files from Training #4

Closed (gauenk closed this issue 1 year ago)

gauenk commented 2 years ago

Thank you for your awesome code!

I am hoping you might open-source the log files you have from training. Maybe the training and validation loss as a function of epoch (and/or batch) with an estimate of the runtime?

mengchuangji commented 2 years ago

> Thank you for your awesome code!
>
> I am hoping you might open-source the log files you have from training. Maybe the training and validation loss as a function of epoch (and/or batch) with an estimate of the runtime?

Configuration:

Namespace(act='relu', batch_size=8, benchmark_noise=False, beta1=0.9, beta2=0.9999,
    chop=False, cpu=True, data_test='SIDD', data_train='SIDD', debug=False,
    decay_type='step', dir_data='/home/shendi_mcj/datasets/SIDD_128/Datasets',
    epochs=100, epsilon=1e-08, ext='sep_reset', extend='.', gamma=0.5, gan_k=1,
    generate=False, load='.', load_best=False, load_dir='.', load_epoch=27,
    load_models=True, loss='1*L1', lr=0.0002, lr_decay_step=100000.0, lr_min=1e-07,
    model='RIDNET', momentum=0.9, n_GPUs=1, n_colors=3, n_feats=64, n_threads=8,
    n_train=20000, n_val=2000, noise=50, noise_g=[1], optimizer='ADAM',
    partial_data=True, patch_size=128, pre_train='experiment/ridnet.pt',
    precision='single', predict_patch_size=800, print_every=100, print_model=False,
    reduction=16, res_scale=1, reset=False, resume=0, rgb_range=255, save='./',
    save_models=True, save_results=False, savepath='./save', seed=1,
    self_ensemble=False, shift_mean=True, skip_threshold=1000000.0, split_batch=1,
    template='.', test_every=1000, test_only=False, testpath='./test',
    timestamp='1665173019', weight_decay=0.8)
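As a quick sanity check, the tqdm counters in the log are fully determined by this config: n_train=20000 and n_val=2000 with batch_size=8 give 2500 training steps and 250 validation steps per epoch. A minimal sketch (variable names are mine, not from the PNGAN code base):

```python
# Values copied from the Namespace dump above.
n_train, n_val, batch_size = 20000, 2000, 8

# Integer division mirrors drop-last batching; these match the
# 2500/2500 and 250/250 progress-bar totals in the training log.
train_steps = n_train // batch_size  # 2500
val_steps = n_val // batch_size      # 250

print(train_steps, val_steps)  # 2500 250
```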

Making model...
Loading model from experiment/ridnet.pt
Load Model from epoch: 27
Epoch 28: g_trn_l=4260.0166, d_trn_l=-8.6347: 100%|█| 2500/2500 [18:04<00:00]
val_d_loss=-16553.590091705322, val_g_loss=1456043.0610351562: 100%|█| 250/250
Epoch 29: val_d_loss=-8.276795045852662, val_g_loss=728.0215305175781
Epoch 29: g_trn_l=612.9119, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:27<00:00]
val_d_loss=-16508.084072113037, val_g_loss=1441245.7890625: 100%|█| 250/250
Epoch 30: val_d_loss=-8.254042036056518, val_g_loss=720.62289453125
Epoch 30: g_trn_l=569.982, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:29<00:00]
val_d_loss=-16534.407222747803, val_g_loss=1445840.658203125: 100%|█| 250/250
Epoch 31: val_d_loss=-8.267203611373901, val_g_loss=722.9203291015625
Epoch 31: g_trn_l=642.1012, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:27<00:00]
val_d_loss=-16530.938148498535, val_g_loss=1451748.1240234375: 100%|█| 250/250
Epoch 32: val_d_loss=-8.265469074249268, val_g_loss=725.8740620117187
Epoch 32: g_trn_l=1922.5544, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:29<00:00]
val_d_loss=-16461.000728607178, val_g_loss=1447958.69140625: 100%|█| 250/250
Epoch 33: val_d_loss=-8.23050036430359, val_g_loss=723.979345703125
Epoch 33: g_trn_l=596.3593, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:28<00:00]
val_d_loss=-16396.410007476807, val_g_loss=1444418.5522460938: 100%|█| 250/250
Epoch 34: val_d_loss=-8.198205003738403, val_g_loss=722.2092761230468
Epoch 34: g_trn_l=4094.3552, d_trn_l=-8.6347: 100%|█| 2500/2500 [17:33<00:00]
val_d_loss=-16325.490299224854, val_g_loss=1430286.7797851562: 100%|█| 250/250
Epoch 35: val_d_loss=-8.162745149612427, val_g_loss=715.1433898925782
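If it helps anyone tabulate or plot these numbers, here is a minimal sketch for pulling the per-epoch validation losses out of a raw log dump. The regex and function names are illustrative (they are not part of the PNGAN code base), and it targets only the `Epoch N: val_d_loss=..., val_g_loss=...` summary lines:

```python
import re

# Matches the per-epoch validation summary lines; the train-progress
# lines (g_trn_l/d_trn_l) deliberately do not match this pattern.
LINE_RE = re.compile(
    r"Epoch (\d+): val_d_loss=(-?[\d.]+), val_g_loss=(-?[\d.]+)"
)

def parse_log(text):
    """Return [(epoch, val_d_loss, val_g_loss), ...] from a raw log dump."""
    return [(int(e), float(d), float(g)) for e, d, g in LINE_RE.findall(text)]

# Two lines copied from the log above, as a smoke test.
log = (
    "Epoch 29: val_d_loss=-8.276795045852662, val_g_loss=728.0215305175781\n"
    "Epoch 30: val_d_loss=-8.254042036056518, val_g_loss=720.62289453125\n"
)

for epoch, d_loss, g_loss in parse_log(log):
    print(epoch, d_loss, g_loss)
```

On runtime: the log shows roughly 17.5 minutes per 2500-step training epoch, so the full 100-epoch schedule works out to around 29 hours on this setup, not counting the validation passes.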