hr538s100k_noreg
gamma_flow, gamma_mask = 0.0, 0.0
BN_momentum = 0.99999
lr: lr_step7
aborted at epoch 4 (5th)
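These early runs all pin BN_momentum = 0.99999. As a hedged sketch (the model code isn't shown in this log), assuming a Keras model, that value would map to the `momentum` argument of `BatchNormalization`, which is the decay rate of the running mean/variance:

```python
import tensorflow as tf

# Hypothetical sketch of the BN_momentum setting. Keras updates
# moving_stat = moving_stat * momentum + batch_stat * (1 - momentum),
# so 0.99999 updates the running statistics extremely slowly.
BN_MOMENTUM = 0.99999

bn = tf.keras.layers.BatchNormalization(momentum=BN_MOMENTUM)
```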
hr538s100k_real
gamma_flow, gamma_mask = 0.0001, 0.00005
BN_momentum = 0.99999
lr: lr_step7
changed to real data
gamma/brightness/contrast/RGB-shift augmentations excluded
aborted at epoch 57 (58th)
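The run names ("noreg" vs. "real"/"reg") suggest gamma_flow and gamma_mask weight regularization terms added to the main reconstruction loss. A minimal sketch of that composition, assuming a photometric L1 loss plus a flow-smoothness and a mask-magnitude penalty (the actual terms aren't given in this log):

```python
import tensorflow as tf

def total_loss(pred_frame, true_frame, flow, mask,
               gamma_flow=1e-4, gamma_mask=5e-5):
    """Hypothetical loss sketch: reconstruction + weighted regularizers.

    Only the gamma weights come from the log; the exact regularizers are
    assumptions (flow smoothness via spatial differences, mask L2)."""
    recon = tf.reduce_mean(tf.abs(pred_frame - true_frame))
    # Total-variation-style smoothness penalty on the predicted flow.
    flow_reg = (tf.reduce_mean(tf.abs(flow[:, 1:] - flow[:, :-1])) +
                tf.reduce_mean(tf.abs(flow[:, :, 1:] - flow[:, :, :-1])))
    mask_reg = tf.reduce_mean(tf.square(mask))
    return recon + gamma_flow * flow_reg + gamma_mask * mask_reg
```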
hr538s58_100k_real
gamma_flow, gamma_mask = 0.000001, 0.0000005
BN_momentum = 0.99999
lr: low_lr (1e-5 constant)
aborted at epoch 1 (2nd)
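The named schedules (lr_step7, lr_step8, lr_step9, lr_mul_inv_low) aren't defined anywhere in this log; only low_lr is stated to be a constant 1e-5. For illustration only, a step-decay schedule like lr_step7 could be built in Keras as below; the boundaries and rates are made up:

```python
import tensorflow as tf

# Hypothetical stand-ins for the named schedules in this log.
low_lr = 1e-5  # stated in the log: constant 1e-5

lr_step7 = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[50_000, 100_000],   # step counts (made up)
    values=[1e-3, 1e-4, 1e-5])      # LR per interval (made up)

optimizer = tf.keras.optimizers.Adam(learning_rate=lr_step7)
```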
hr3216_100k_real
gamma_flow, gamma_mask = 0.000001, 0.0000005
BN_momentum = 0.99999
lr: lr_step7
image shape changed to (960,540)
batch size 5
aborted at epoch 20 (21st)
hr3216_21_100k_real
loss fixed
gamma_flow, gamma_mask = 0.01, 0.005
same as above
forgot to load the previous checkpoint, so it trained from scratch
hr3216_merged
merged flow
other things same as above
aborted
hr3216_half
frame doubler (for sanity check)
aborted at epoch 23 (24th)
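The "frame doubler" sanity check suggests the model synthesizes an intermediate frame from two neighbors via the predicted flow and a blending mask. A hedged sketch of that composition step, using `tfa.image.dense_image_warp` as the warp (the repo's actual formulation isn't shown here):

```python
import tensorflow as tf
import tensorflow_addons as tfa

def double_frames(frame0, frame1, flow01, flow10, mask):
    """Hypothetical mid-frame synthesis for the frame-doubler check:
    warp both neighbors halfway along the predicted flows, then blend
    with the predicted mask.

    frame0, frame1: (B, H, W, 3); flows: (B, H, W, 2); mask: (B, H, W, 1)
    """
    warped0 = tfa.image.dense_image_warp(frame0, 0.5 * flow10)
    warped1 = tfa.image.dense_image_warp(frame1, 0.5 * flow01)
    # mask in [0, 1] selects per-pixel between the two warped frames.
    return mask * warped0 + (1.0 - mask) * warped1
```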
hr3216_half_lowerlr
lr : lr_step8
aborted at epoch 8 (9th)
hr3216_half_noaug
lr: lr_step7
Only flipping and rotating
Checked the predicted flow -> flow does exist, but only very little
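"Only flipping and rotating" corresponds to geometry-only augmentation. An eager-mode sketch of what that could look like (note that flow targets would also need their vector components flipped/rotated, which is omitted here):

```python
import tensorflow as tf

def flip_rotate_only(frames):
    """Sketch of flip/rotate-only augmentation (no color jitter, no
    resizing). `frames` stacks a clip as (T, H, W, C) so every frame of
    the pair gets the same transform; assumes square crops so rot90
    preserves the shape."""
    if tf.random.uniform(()) < 0.5:
        frames = tf.reverse(frames, axis=[2])        # horizontal flip
    if tf.random.uniform(()) < 0.5:
        frames = tf.reverse(frames, axis=[1])        # vertical flip
    k = tf.random.uniform((), 0, 4, dtype=tf.int32)  # 0-3 quarter turns
    return tf.image.rot90(frames, k=k)
```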
hr3216_half_noaug_200
lr: lr_step7
load from 12th epoch 99 (100th)
gamma: 0.01 & 0.005
hr3216_half_noaug_300
lr: lr_step7
load from 13th epoch 99 (100th)
gamma: 0.03 & 0.015
hr3216_half_noaug_400
lr: lr_step7
load from 14th epoch 99 (100th)
gamma: 0.1 & 0.015
hr3216_half_noaug_500
load from 15th epoch 99 (100th)
anime-like data added
other things same as 15th
hr3216_half_noaug_600
load from 16th epoch 99 (100th)
changed to epochs: 20, steps: 5000
other things unchanged
hr3216_half_noaug_700
load from 17th epoch 19 (20th)
added @tf.function to bilinear interpolation
other things unchanged
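Decorating the interpolation with @tf.function compiles it into a TensorFlow graph, avoiding per-call Python overhead. A minimal sketch of that change (the repo's actual interpolation function isn't shown, so a plain bilinear resize stands in):

```python
import tensorflow as tf

@tf.function
def bilinear_resize(images, height, width):
    # Graph-compiled bilinear interpolation; @tf.function traces this
    # once per input signature instead of re-running Python each call.
    return tf.image.resize(images, (height, width), method="bilinear")
```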
hr3216_half_noaug_850
load from 18th epoch 19 (20th)
lr: lr_step9
gamma: 0.05 & 0.015
other things unchanged
hr3216_half_noaug_950
load from 19th epoch 29 (30th)
batch size: 7
other things unchanged
hr3216_half_noaug_1000
load from 20th epoch 19 (20th)
other things unchanged
hr3216_half_bilinear_200
New training
changed deconvolution layer to bilinear upscaling
changed final convolution layer kernel size to 3x3 (from 1x1)
gamma: 0.05 & 0.015
batch size: 6
lr: lr_step7
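Replacing transposed convolution with bilinear upsampling followed by a regular convolution is a common fix for checkerboard artifacts. A sketch of the two changes in this run (filter counts are assumptions, not the repo's values):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Before (sketch): learned deconvolution upsampling.
deconv = layers.Conv2DTranspose(32, kernel_size=4, strides=2, padding="same")

# After (sketch): fixed bilinear upscale, then a normal convolution.
upsample = tf.keras.Sequential([
    layers.UpSampling2D(size=2, interpolation="bilinear"),
    layers.Conv2D(32, kernel_size=3, padding="same"),
])

# Final layer: kernel size changed from 1x1 to 3x3 (channel count assumed).
final_conv = layers.Conv2D(3, kernel_size=3, padding="same")
```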
hr3216_half_bilinear_260
load from 22nd epoch 39 (40th)
lr: lr_mul_inv_low
hr3216_half_bilinear_400
load from 23rd epoch 11 (12th)
lr: lr_step9
hr3216_half_bilinear_500
load from 24th epoch 27 (28th)
lr: lr_step9
gamma: 0.1 & 0.03
hr3216_half_bilinear_600
load from 25th epoch 20
lr: lr_step9
hr3216_half_bilinear_700
load from 26th epoch 20
lr: lr_step9
hr3216_half_bilinear_800
load from 27th epoch 20
lr: lr_step9
hr3216_half_bilinear_900
load from 28th epoch 20
lr: low_lr
hr3216_half_bilinear_1000
load from 29th epoch 20
lr: lr_step9
~gamma: 100.0 & 50.0~ : aborted
hr3216_half_ani_100
load from 30th epoch 20
Now training only with anime-like videos
No resize augmentation. Only flipping
lr: lr_step9
Other things untouched
hr3216_half_ani_mv_100
load from 30th epoch 20 (31st was not so successful)
added shifting windows
parallel dataset added
lr: lr_step9
other things untouched
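"Shifting windows" reads like sliding temporal windows over each video's frame sequence, and "parallel dataset" like drawing windows from several videos at once. A hedged tf.data sketch of both (the loader, identifiers, and window length are assumptions):

```python
import tensorflow as tf

def load_frames(_video_id):
    # Hypothetical stub; the real pipeline would decode video frames.
    return tf.zeros([100, 320, 320, 3])

def video_to_windows(frames, window=2, shift=1):
    """Slide a length-`window` window over one video's frames; shift=1
    yields overlapping consecutive-frame training examples."""
    ds = tf.data.Dataset.from_tensor_slices(frames)
    ds = ds.window(window, shift=shift, drop_remainder=True)
    return ds.flat_map(lambda w: w.batch(window))

# "Parallel dataset": interleave windows from several videos at once.
video_ids = tf.data.Dataset.from_tensor_slices(["vid_a", "vid_b"])  # placeholders
dataset = video_ids.interleave(
    lambda v: video_to_windows(load_frames(v)),
    cycle_length=2, num_parallel_calls=tf.data.AUTOTUNE)
```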
hr3216_half_ani_mv_200
load from 32nd epoch 20
lr: low_lr
BN_momentum = 0.999 (0.99999 seems to update the BN running statistics too slowly)
hr3216_half_ani_mv_400
load from 33rd epoch 20
lr: low_lr
epochs: 40, steps: 5000
Instead of feeding the whole frame at once, divide it into pieces
Training is similar: cut a small portion of each frame (see the sketch after this list)
Change input size to 320x320 and increase model size
Try Efficient HRNet
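The patch-training item above could be a random crop applied identically to the input pair and the target so they stay aligned. A sketch, with 320 being the stated size (the stacking trick and RGB channel count are assumptions):

```python
import tensorflow as tf

def random_patch(frame0, frame1, target, size=320):
    """Cut the same random size x size patch from the input pair and the
    ground-truth frame so they stay spatially aligned (RGB assumed)."""
    stacked = tf.stack([frame0, frame1, target])      # (3, H, W, 3)
    patch = tf.image.random_crop(stacked, [3, size, size, 3])
    return patch[0], patch[1], patch[2]
```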
ehrb0_100
Used ehrb0_143_32
batch size: 24
lr: lr_step7
epochs: 100, steps: 1000
gamma: 0.1 & 0.03 ~(NaN started after changing to 0.01&0.005 - need to check)~
~Changed last layer activation from tanh to relu (-1,1)~
Rolled back to tanh
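The struck-through experiment swapped the final activation from tanh to relu and was rolled back; tanh naturally bounds outputs to (-1, 1), matching targets normalized to that range. A one-line sketch of the retained head (channel count is an assumption):

```python
import tensorflow as tf

# Retained output head (sketch): tanh bounds predictions to (-1, 1);
# the relu variant was rolled back. Channel count is an assumption.
head = tf.keras.layers.Conv2D(3, kernel_size=3, padding="same",
                              activation="tanh")
```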
ehrb0_200
continue from 35th epoch 100
lr: low_lr
ehrb0_500
continue from 36th epoch 100
lr: low_lr
epochs: 300, steps: 1000
hr538e2_bugged
hr538s20k
hr538s20k_reg_lowlr
added the regularization term (weighted by gamma_flow/gamma_mask)
lr: lr_mul_inv_low
hr538s20k_reg_lowlr_lowgamma & hr538s20_120k_reg_lowlr_lowgamma
gamma_flow, gamma_mask = 0.001, 0.0005
lr: lr_mul_inv_low up to 20k steps, lr_step7 afterwards
aborted at epoch 44 (45th)
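The combined schedule above (lr_mul_inv_low until 20k steps, then lr_step7) could be expressed as one schedule that switches at the boundary. A sketch, with both inner schedules stubbed since their definitions aren't in this log:

```python
import tensorflow as tf

class SwitchAt(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Sketch: use `first` until `boundary` steps, then `second`.

    Stands in for "lr_mul_inv_low until 20k, lr_step7 after"; the inner
    schedules below are placeholders, not the repo's definitions."""

    def __init__(self, first, second, boundary):
        self.first, self.second, self.boundary = first, second, boundary

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        return tf.where(step < self.boundary,
                        self.first(step), self.second(step))

lr_mul_inv_low = lambda step: 1e-4 / (1.0 + step / 1000.0)  # placeholder
lr_step7 = lambda step: 1e-5                                # placeholder
schedule = SwitchAt(lr_mul_inv_low, lr_step7, boundary=20_000)
```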