jaentrouble / anime60

60frame anime
GNU General Public License v3.0

Training records #1

Open jaentrouble opened 4 years ago

jaentrouble commented 4 years ago
  1. hr538e2_bugged

    • subtracted 1 instead of adding 1
  2. hr538s20k

    • fixed the bug from run 1
  3. hr538s20k_reg_lowlr

    • added a regularization term (see the loss sketch after this list)

    • lr: lr_mul_inv_low

  4. hr538s20k_reg_lowlr_lowgamma & hr538s20_120k_reg_lowlr_lowgamma

    • gamma_flow, gamma_mask = 0.001, 0.0005

    • lr: up to 20k, lr_mul_inv_low; after 20k, lr_step7

    • abort at epoch 44 (45th)
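
A minimal sketch of how the gamma-weighted regularization could enter the total loss. `gamma_flow` and `gamma_mask` are the names used in this log; the total-variation smoothness penalties below are an assumption for illustration, not the repo's actual terms.

```python
import tensorflow as tf

def total_loss(recon_loss, flow, mask,
               gamma_flow=0.001, gamma_mask=0.0005):
    # Gamma defaults taken from this log; the regularizers themselves
    # (total variation on the flow and mask maps) are assumptions.
    flow_reg = tf.reduce_mean(tf.image.total_variation(flow))
    mask_reg = tf.reduce_mean(tf.image.total_variation(mask))
    return recon_loss + gamma_flow * flow_reg + gamma_mask * mask_reg
```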

jaentrouble commented 4 years ago
  1. hr538s100k_noreg

    • gamma_flow, gamma_mask = 0.0, 0.0

    • BN_momentum = 0.99999 (see the sketch after this list)

    • lr : lr_step7

    • abort at epoch 4 (5th)
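
A momentum this close to 1 makes the batch-norm moving statistics update very slowly; a later run (hr3216_half_ani_mv_200) drops it to 0.999 for exactly that reason. A hypothetical layer construction, assuming the standard Keras argument:

```python
from tensorflow.keras import layers

BN_MOMENTUM = 0.99999  # value from this log

# Hypothetical usage; the repo's model code may wire this differently.
bn = layers.BatchNormalization(momentum=BN_MOMENTUM)
```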

jaentrouble commented 4 years ago
  1. hr538s100k_real

    • gamma_flow, gamma_mask = 0.0001, 0.00005

    • BN_momentum = 0.99999

    • lr: lr_step7 (see the schedule sketch after this list)

    • changed to real data

    • gamma/brightness/contrast/rgbshift excluded

    • aborted at epoch 57 (58th)
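
`lr_step7` (like the other `lr_*` names in this log) is a schedule defined in the repo; its exact boundaries and values are not given in this thread. A hypothetical step-decay reconstruction with placeholder numbers:

```python
import tensorflow as tf

def lr_step7(epoch, lr=None):
    # Placeholder boundaries/values; the real lr_step7 lives in the repo.
    boundaries = (10, 30, 60)
    values = (1e-3, 1e-4, 1e-5, 1e-6)
    for i, b in enumerate(boundaries):
        if epoch < b:
            return values[i]
    return values[-1]

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_step7)
```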

jaentrouble commented 4 years ago
  1. hr538s58_100k_real

    • gamma_flow, gamma_mask = 0.000001, 0.0000005

    • BN_momentum = 0.99999

    • lr: low_lr (1e-5 constant)

    • aborted at epoch 1 (2nd)

jaentrouble commented 4 years ago
  1. hr3216_100k_real

    • gamma_flow, gamma_mask = 0.000001, 0.0000005

    • BN_momentum = 0.99999

    • lr: lr_step7

    • image shape changed to (960,540)

    • batch size 5

    • abort at epoch 20 (21st)

jaentrouble commented 4 years ago
  1. hr3216_21_100k_real

    • loss fixed

    • gamma_flow, gamma_mask = 0.01, 0.005

    • same as above

    • forgot to load weights; it was trained from scratch

jaentrouble commented 4 years ago
  1. hr3216_merged

    • merged flow

    • other things same as above

    • abort

jaentrouble commented 4 years ago
  1. hr3216_half

    • frame doubler (For sanity check)

    • abort at epoch 23 (24th)

jaentrouble commented 4 years ago
  1. hr3216_half_lowerlr

    • lr : lr_step8

    • abort at epoch 8 (9th)

jaentrouble commented 4 years ago
  1. hr3216_half_noaug

    • lr: lr_step7

    • Only flipping and rotating

    • Checked flow -> flow does exist, but only very little

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_200

    • lr: lr_step7

    • load from 12th run, epoch 99 (100th)

    • gamma: 0.01 & 0.005

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_300

    • lr: lr_step7

    • load from 13th run, epoch 99 (100th)

    • gamma: 0.03 & 0.015

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_400

    • lr: lr_step7

    • load from 14th run, epoch 99 (100th)

    • gamma: 0.1 & 0.015

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_500

    • load from 15th run, epoch 99 (100th)

    • anime-like data added

    • other things same as the 15th run

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_600

    • load from 16th run, epoch 99 (100th)

    • changed to epoch 20 * steps 5000

    • other things unchanged

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_700

    • load from 17th run, epoch 19 (20th)

    • added @tf.function to bilinear interpolation (see the sketch after this list)

    • other things unchanged
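
The repo's bilinear-interpolation helper is not shown in this thread; the sketch below is a generic stand-in showing the kind of gather-heavy sampling function that benefits from an `@tf.function` graph trace:

```python
import tensorflow as tf

@tf.function
def bilinear_sample(img, x, y):
    """Sample a float image (H, W, C) at float pixel coords x, y.
    Hypothetical stand-in for the repo's bilinear helper."""
    h, w = tf.shape(img)[0], tf.shape(img)[1]
    x0, y0 = tf.floor(x), tf.floor(y)
    wx, wy = x - x0, y - y0  # fractional parts = interpolation weights
    x0i = tf.clip_by_value(tf.cast(x0, tf.int32), 0, w - 1)
    y0i = tf.clip_by_value(tf.cast(y0, tf.int32), 0, h - 1)
    x1i = tf.clip_by_value(x0i + 1, 0, w - 1)
    y1i = tf.clip_by_value(y0i + 1, 0, h - 1)
    top = ((1.0 - wx)[..., None] * tf.gather_nd(img, tf.stack([y0i, x0i], -1))
           + wx[..., None] * tf.gather_nd(img, tf.stack([y0i, x1i], -1)))
    bot = ((1.0 - wx)[..., None] * tf.gather_nd(img, tf.stack([y1i, x0i], -1))
           + wx[..., None] * tf.gather_nd(img, tf.stack([y1i, x1i], -1)))
    return (1.0 - wy)[..., None] * top + wy[..., None] * bot
```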

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_850

    • load from 18th run, epoch 19 (20th)

    • lr: lr_step9

    • gamma: 0.05 & 0.015

    • other things unchanged

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_950

    • load from 19th run, epoch 29 (30th)

    • batch size: 7

    • other things unchanged

jaentrouble commented 4 years ago
  1. hr3216_half_noaug_1000

    • load from 20th run, epoch 19 (20th)

    • other things unchanged

jaentrouble commented 4 years ago
  1. hr3216_half_bilinear_200

    • New training

    • changed deconvolution layer to bilinear upscaling (see the sketch after this list)

    • changed final convolution layer kernel size to 3x3 (from 1x1)

    • gamma: 0.05 & 0.015

    • batch size: 6

    • lr: lr_step7
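
A sketch of the two architectural changes in this run: `Conv2DTranspose` (deconvolution) replaced by bilinear upsampling plus convolution, and the final convolution widened from 1x1 to 3x3. Filter counts are illustrative assumptions:

```python
from tensorflow.keras import layers

def upscale_block(x, filters):
    # Bilinear upscaling + conv in place of a deconvolution layer.
    x = layers.UpSampling2D(size=2, interpolation='bilinear')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def output_head(x, out_channels):
    # Final convolution kernel changed to 3x3 (from 1x1).
    return layers.Conv2D(out_channels, 3, padding='same')(x)
```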

jaentrouble commented 4 years ago
  1. hr3216_half_bilinear_260

    • load from 22nd run, epoch 39 (40th)

    • lr: lr_mul_inv_low

jaentrouble commented 4 years ago
  1. hr3216_half_bilinear_400

    • load from 23rd run, epoch 11 (12th)

    • lr: lr_step9

jaentrouble commented 4 years ago
  1. hr3216_half_bilinear_500

    • load from 24th run, epoch 27 (28th)

    • lr: lr_step9

    • gamma: 0.1 & 0.03

jaentrouble commented 3 years ago
  1. hr3216_half_bilinear_600

    • load from 25th run, epoch 20

    • lr: lr_step9

jaentrouble commented 3 years ago
  1. hr3216_half_bilinear_700

    • load from 26th run, epoch 20

    • lr: lr_step9

jaentrouble commented 3 years ago
  1. hr3216_half_bilinear_800

    • load from 27th run, epoch 20

    • lr: lr_step9

jaentrouble commented 3 years ago
  1. hr3216_half_bilinear_900

    • load from 28th run, epoch 20

    • lr: low_lr

jaentrouble commented 3 years ago
  1. hr3216_half_bilinear_1000

    • load from 29th run, epoch 20

    • lr: lr_step9

    • ~~gamma: 100.0 & 50.0~~ : Abort

    • Restart with gamma 0.01 & 0.005

jaentrouble commented 3 years ago
  1. hr3216_half_ani_100

    • load from 30th run, epoch 20

    • Now training only with ani-like videos

    • No resize augmentation. Only flipping

    • lr: lr_step9

    • Other things untouched

jaentrouble commented 3 years ago
  1. hr3216_half_ani_mv_100

    • load from 30th run, epoch 20 (the 31st run was not so successful)

    • added shifting windows

    • parallel dataset added

    • lr: lr_step9

    • other things untouched

jaentrouble commented 3 years ago
  1. hr3216_half_ani_mv_200

    • load from 32nd run, epoch 20

    • lr: low_lr

    • BN_MOMENTUM = 0.999 (0.99999 seems too slow)

jaentrouble commented 3 years ago
  1. hr3216_half_ani_mv_400

    • load from 33rd run, epoch 20

    • lr: low_lr

    • epoch: 40 steps: 5000

jaentrouble commented 3 years ago

Change strategy

jaentrouble commented 3 years ago
  1. ehrb0_100

    • Used ehrb0_143_32

    • batch size: 24

    • lr: lr_step7

    • epoch: 100 steps 1000

    • gamma: 0.1 & 0.03 ~~(NaN started after changing to 0.01 & 0.005 - need to check)~~

      • gamma does not prevent NaN

~~Changed last layer activation from tanh to relu (-1,1)~~

Rolled back to tanh

jaentrouble commented 3 years ago
  1. ehrb0_200

    • continue from 35th run, epoch 100

    • lr: low_lr

jaentrouble commented 3 years ago
  1. ehrb0_500

    • continue from 36th run, epoch 100

    • lr: low_lr

    • epoch:300 steps 1000

Abort at epoch 225

jaentrouble commented 3 years ago

Change strategy

jaentrouble commented 3 years ago
  1. ehrb0_resize_300
    • new start, using the strategy above
    • lr: lr_step7_3
    • batch size: 12
    • training frame size: (960,540)
    • flow_map_size: (512,288)
    • epoch 300, steps 1000
jaentrouble commented 3 years ago

Why weird 'holes'?

Things to try

jaentrouble commented 3 years ago
  1. ehrb0_double_300
    • New start, using strategy above
    • lr: lr_step8_2 (1/100 of lr_step7_3)
    • doubled flow; range (-2,2) (see the sketch at the end of this comment)
    • delta_max : 60
    • training frame size: (960,540)
    • flow_map_size: (512,288)
    • epoch 300, steps 1000
    • batch size: 12

Abort at epoch 79
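
One reading of "doubled flow; range (-2,2)" is a tanh-bounded flow head scaled by 2 so each displacement component spans (-2, 2) instead of tanh's (-1, 1). That reading is an assumption; the repo's head may differ:

```python
import tensorflow as tf

def doubled_flow_head(features):
    # Assumed interpretation: scale the tanh output by 2.
    raw = tf.keras.layers.Conv2D(2, 3, padding='same',
                                 activation='tanh')(features)
    return 2.0 * raw  # flow components in (-2, 2)
```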

jaentrouble commented 3 years ago

Lowering learning rate made it worse

jaentrouble commented 3 years ago
  1. ehrb0_zero_300
jaentrouble commented 3 years ago
  1. ehrb0_edge_pre_100
    • New start
    • Pre-training with edge maps (for later cyclic training; see the sketch below)
    • lr: lr_step7_2
    • epoch 100 steps 1000
    • batch size: 16
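
How the edge maps are produced is not shown in this thread; a Sobel-based sketch of a plausible edge-map target:

```python
import tensorflow as tf

def edge_map(images):
    # Hypothetical edge-map target for the pre-training runs;
    # expects a float batch (B, H, W, 3).
    gray = tf.image.rgb_to_grayscale(images)  # (B, H, W, 1)
    sobel = tf.image.sobel_edges(gray)        # (B, H, W, 1, 2)
    return tf.norm(sobel, axis=-1)            # gradient magnitude
```
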
jaentrouble commented 3 years ago
  1. cyclic_100
    • Load from run 41
    • Using full cyclic training model
    • lr: lr_step7_2
    • epoch 100 steps 1000
    • batch size : 8
jaentrouble commented 3 years ago
  1. ehrb0_edge_pre2_100
    • New start
    • Found a big mistake in the functional HR model
    • lr: lr_step7_2
    • epoch 100 steps 1000
    • batch size: 16