Replace rectifier in the conv layer with more robust alternatives ( Current code: LeakyReLU )
PReLU
ELU
PELU
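As a minimal Keras sketch of what the swap could look like (layer sizes here are illustrative, not the repo's; PELU has no built-in Keras layer and would need a custom implementation):

```python
from keras.layers import Input, Conv2D, LeakyReLU, PReLU, ELU
from keras.models import Model

inp = Input(shape=(64, 64, 3))

# Current style: fixed negative slope
x = Conv2D(128, 5, strides=2, padding='same')(inp)
x = LeakyReLU(alpha=0.1)(x)

# PReLU: the negative slope is learned; sharing across H and W keeps it
# to one parameter per channel
x = Conv2D(256, 5, strides=2, padding='same')(x)
x = PReLU(shared_axes=[1, 2])(x)

# ELU: smooth saturation for negative inputs, pushes mean activations
# toward zero
x = Conv2D(512, 5, strides=2, padding='same')(x)
x = ELU(alpha=1.0)(x)

model = Model(inp, x)
```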
Adjust learning rate when user changes batch size ( Current code: no LR scaling with batch size )
Lots of papers cover this, but simply put: with a larger batch you can afford larger step sizes and train more quickly at the same model stability
Apply a linear scaling factor to the learning rate when the batch size is increased ( doubling the batch size doubles the learning rate; see the sketch below )
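A minimal sketch of the linear scaling rule; the base batch size here is an assumption, not a value from the repo:

```python
from keras.optimizers import Adam

BASE_LR = 5e-5    # the current code's learning rate
BASE_BATCH = 16   # assumed batch size that BASE_LR was tuned against

def scaled_lr(batch_size):
    """Linear scaling rule: doubling the batch doubles the learning rate."""
    return BASE_LR * batch_size / BASE_BATCH

# e.g. a user bumps the batch size from 16 to 64, so the LR goes 5e-5 -> 2e-4
optimizer = Adam(lr=scaled_lr(64))
```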
Explore using other optimizers in autoencoder ( Current code: Adam )
SGD with momentum
Cyclical Learning Rate
L4Adam
YellowFin
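SGD with momentum is built into Keras; CLR, L4Adam, and YellowFin would need third-party implementations. A sketch with a toy stand-in model (the LR and momentum values are guesses, not tuned):

```python
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import SGD

# Tiny stand-in for the autoencoder, just to make the snippet runnable
inp = Input(shape=(1024,))
autoencoder = Model(inp, Dense(1024)(inp))

# SGD with Nesterov momentum often generalizes better than Adam once the
# LR is tuned, at the cost of slower early progress
autoencoder.compile(optimizer=SGD(lr=1e-4, momentum=0.9, nesterov=True),
                    loss='mean_absolute_error')
```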
Effective learning rate schedule ( Current code: no adjustment of LR after start or at stagnation )
Best practice is to lower the learning rate, or raise the batch size, either at set intervals or after the loss has stagnated; a Keras sketch follows ( Note: even with a mini-batch size capped at, say, 16 by GPU memory, you can still increase the effective batch size, e.g. by running on multiple GPUs or accumulating over sequential GPU runs )
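For the stagnation case, Keras ships the ReduceLROnPlateau callback; a minimal sketch reusing the toy autoencoder from the optimizer snippet above (the thresholds are guesses):

```python
import numpy as np
from keras.callbacks import ReduceLROnPlateau

# Halve the LR whenever the loss hasn't improved for 10 epochs, with a
# floor so the LR never collapses to zero
lr_schedule = ReduceLROnPlateau(monitor='loss', factor=0.5,
                                patience=10, min_lr=1e-6, verbose=1)

# Random stand-in data; the real trainer would feed warped/target batches
warped = np.random.rand(64, 1024)
target = np.random.rand(64, 1024)
autoencoder.fit(warped, target, epochs=100, callbacks=[lr_schedule])
```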
Initial learning rate ( Current code: 5e-5 )
Dependent on model architecture ( normalization, batch size, regularization, better rectifiers, and optimizers all allow you to increase the LR at the same stability/accuracy )
Suspect it is too low, but the current model has few of the tweaks which promote training stability
Also highly dependent on the default batch size
Use keras.preprocessing.image.ImageDataGenerator ( Current code: random_transform and other )
More built-in transforms ( shear, skew, whitening, etc. ) to create warped images that are sent to the trainer
Built-in normalization for warped batches
Integrated into the Keras model pipeline for queuing and parallelism
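A minimal sketch of the generator replacing the hand-rolled warps (the transform ranges are illustrative, and the real trainer would consume batches instead of breaking out of the loop):

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Warping and per-sample normalization handled by one object
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.05,
                             height_shift_range=0.05,
                             shear_range=0.2,
                             zoom_range=0.05,
                             horizontal_flip=True,
                             samplewise_center=True,
                             samplewise_std_normalization=True)

# Random stand-in for a stack of aligned 64x64 face crops
faces = np.random.rand(128, 64, 64, 3)
for warped_batch in datagen.flow(faces, batch_size=16):
    break  # each iteration yields a normalized, randomly warped batch
```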
Creating a list to organize my own thoughts, but I would love to hear everyone else's ideas and suggestions, as well as what they're looking at.
Ways to improve the faceswap model ( accuracy, speed, robustness to outliers, model convergence )
1. Improved face detection options ( Current code: dlib/CNN+mmod )
2. Improved face recognition ( Current code: ageitgey/face_recognition )
3. Improved face alignment ( Current code: 1adrianb/face-alignment )
4. Add Batch Normalization after Conv and Dense layers in the autoencoder ( Current code: no normalization; see the sketch after this list )
5. Replace rectifier in the conv layer with more robust alternatives ( Current code: LeakyReLU )
6. Adjust learning rate when user changes batch size ( Current code: no LR scaling with batch size )
7. Explore using other optimizers in autoencoder ( Current code: Adam )
8. Effective learning rate schedule ( Current code: no adjustment of LR after start or at stagnation )
9. Initial learning rate ( Current code: 5e-5 )
10. Use keras.preprocessing.image.ImageDataGenerator ( Current code: random_transform and other )
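Item 4 has no detail section above, so here is a minimal sketch of where the normalization could go (Conv/Dense sizes are illustrative, not a re-write of Original_Model):

```python
from keras.layers import (Input, Conv2D, Dense, Flatten,
                          BatchNormalization, LeakyReLU)
from keras.models import Model

inp = Input(shape=(64, 64, 3))

# Conv -> BatchNorm -> activation: normalizing pre-activation statistics
# usually lets you run a higher LR with the same stability
x = Conv2D(128, 5, strides=2, padding='same')(inp)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.1)(x)

# Same pattern after a Dense layer in the bottleneck
x = Flatten()(x)
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.1)(x)

model = Model(inp, x)
```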
Threw in a lot, but I can add more if anyone ever looks at this. PS - I was looking at items 4/5/7/9 with a re-write of the Original_Model.