Open adriandavidauer opened 3 years ago
it will predict right. The "optimizer" is only necessary for "training", and its state only helps select the proper learning rate and weight updates. Even with a fresh new optimizer, it is possible to train if you manually adjust its properties to reasonable values. A fresh optimizer may have a hard time at the beginning of training, but after a few epochs it will very probably find its way.
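To illustrate the point in that quote: attaching a fresh optimizer (which is all the warning below describes) does not touch the weights, so inference is unaffected. A minimal sketch with a toy model standing in for `bestFaceEndToEnd.h5` (the toy architecture is an assumption, not the real network):

```python
import numpy as np
from tensorflow import keras

# Toy stand-in model; the real bestFaceEndToEnd.h5 is not needed to show this.
model = keras.Sequential(
    [keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))]
)
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(8, 4).astype("float32")
before = model.predict(x, verbose=0)

# Re-compiling attaches a brand-new optimizer (all Adam moments reset) --
# the situation the TF warning refers to -- but the weights are untouched,
# so the predictions are identical.
model.compile(optimizer=keras.optimizers.Adam(), loss="binary_crossentropy")
after = model.predict(x, verbose=0)

assert np.allclose(before, after)
```

So the warning alone cannot explain chance-level test accuracy; it only matters if one resumes training.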
If the dimensions are somehow wrong, it should look weird in the visualization of a sample, or at least different from what we know from the dataset.
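Before eyeballing a visualization, a quick programmatic check of shape, dtype and value range tends to catch dimension or scaling bugs faster. A sketch, where the `(38, 96, 96, 3)` layout of a face-image sample is a hypothetical placeholder, not taken from the repo:

```python
import numpy as np

# Hypothetical sample: a stack of face-image frames.
# The (frames, height, width, channels) layout here is an assumption;
# the real layout should be read off the dataset code.
sample = np.random.randint(0, 256, size=(38, 96, 96, 3), dtype=np.uint8)

def describe(a):
    """Return the facts that usually reveal a dimension or scaling bug."""
    return {
        "shape": a.shape,
        "dtype": str(a.dtype),
        "min": int(a.min()),
        "max": int(a.max()),
    }

print(describe(sample))
```

A `max` of 255 on data the model expects in `[0, 1]` (or vice versa) would already explain chance-level predictions without any visualization.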
Tested the pretrained faceImage model on the test set and it performs poorly! -> something must be wrong with the model!!! All the other models perform as expected!
Accuracy for /home/alubitz/anaconda3/envs/vvadlrs3/lib/python3.7/site-packages/vvadlrs3/bestFaceEndToEnd.h5 is 0.5
MAE for /home/alubitz/anaconda3/envs/vvadlrs3/lib/python3.7/site-packages/vvadlrs3/bestFaceEndToEnd.h5 is 0.5019764296190415 with std 0.4942703556715032
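For what it's worth, accuracy 0.5 with MAE ≈ 0.50 and std ≈ 0.49 on a binary task is exactly the signature of confident (hard 0/1) but uncorrelated predictions, i.e. pure chance level. A small numpy check with synthetic labels (not the real test set) reproduces the same numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic binary labels and confident-but-random hard predictions.
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)

# Accuracy ~0.5, MAE ~0.5, std ~0.5 -- the same pattern as the log above.
err = np.abs(y_true - y_pred)
print("accuracy:", 1 - err.mean())
print("MAE:", err.mean(), "std:", err.std())
```

That points at an input-side mismatch (preprocessing) rather than random weights: randomly initialized weights would typically give soft outputs near 0.5, not an MAE std near 0.5.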
FaceImages are the only ones that are being resized. Maybe something goes wrong there... The model should be fine. I still have some backup models with 94% validation accuracy, and they show the same problem...
Next I should compare how samples are resized for training versus how they are resized for testing.
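One way to run that comparison is to push the same frame through both resize paths and diff the outputs; any non-zero diff means the pipelines disagree (different interpolation, different resize order, or resizing before vs after normalization). A sketch with a reference nearest-neighbour resize standing in for the real calls (which calls the train and test code actually make is an assumption to be filled in):

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Reference nearest-neighbour resize via index selection."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)

# Placeholder: substitute the actual train-time and test-time resize calls here.
train_side = nn_resize(frame, 4, 4)
test_side = nn_resize(frame, 4, 4)

diff = np.abs(train_side.astype(int) - test_side.astype(int))
print("max abs diff:", diff.max())
```

With identical pipelines the diff is zero; on real data even a one-pixel interpolation difference shows up immediately.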
Tests on a random video of me are poor. The pretrained models for the different featureTypes produce very different results, and all of them seem rather bad. I will have to label a video myself to test this more thoroughly, but the first impression is bad!
The classifications are obviously wrong in the video and live demo. Possible causes include the following:
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.