We can now select and configure the optimizer, optionally convert to FP16 (where possible), which at the very least improves memory usage during training, perform image augmentation during training, control label smoothing, etc. A rough sketch of what these knobs look like is below.
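Here's a minimal, hypothetical sketch of how options like these are typically wired up in TensorFlow/Keras; the actual flag names and defaults live in the v2 training scripts and may differ.

```python
import tensorflow as tf

def build_optimizer(name="adam", learning_rate=1e-3):
    # Optimizer selection by name; each optimizer can carry its own settings.
    optimizers = {
        "adam": tf.keras.optimizers.Adam,
        "sgd": tf.keras.optimizers.SGD,
        "rmsprop": tf.keras.optimizers.RMSprop,
    }
    return optimizers[name](learning_rate=learning_rate)

# FP16 where possible: mixed precision keeps variables in float32 but runs
# most ops in float16, which mainly saves memory during training.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Image augmentation applied only on the training pipeline.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

# Label smoothing is a loss-level knob.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
```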
We also always do early stopping automatically once validation accuracy stops improving, and since I'm an old man with old memes, I've set the training target epochs to 9001 in the v2 training scripts, because early stopping will actually decide when training ends.
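Continuing the sketch above, this is roughly what the "epochs=9001, early stopping decides" setup looks like; the patience value, `train_ds`/`val_ds`, and the compiled `model` are placeholders, not the exact values in the scripts.

```python
import tensorflow as tf

# Stop once validation accuracy stops improving; patience here is an assumption.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",
    patience=5,
    restore_best_weights=True,
)

model.compile(optimizer=build_optimizer("adam"), loss=loss, metrics=["accuracy"])
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=9001,  # effectively "run until early stopping kicks in"
    callbacks=[early_stop],
)
```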
Basically I brought over a bunch of configuration options from automl and merged them in, fulfilling a bunch of TODOs in the original training code that was brought in from tfhub.
Unfortunately, all these shiny new tools didn't improve overall accuracy on newly trained models, although loss was significantly lower. I'll attach a newly trained mobilenet model to this PR later.