I am starting the second model, based on the sample from the TensorFlow image classification tutorial: https://www.tensorflow.org/tutorials/images/classification
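For reference, the tutorial builds a small Sequential CNN roughly like the sketch below. The image size, filter counts and layer widths here are assumptions taken from the tutorial, not necessarily my exact values; pixel rescaling is done in the data pipeline instead of a Rescaling layer.

```python
import tensorflow as tf
from tensorflow.keras import layers

img_height, img_width = 180, 180   # assumed input size
num_classes = 2                    # batman and deadpool

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu',
                  input_shape=(img_height, img_width, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)   # logits; softmax is applied later for predictions
])
```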
The first training run of the second model led straight to overfitting.
Splitting the data helps a lot. Using the ImageDataGenerator to split the data into training and validation sets improved the model accuracy. The Dropout layer might also help against the overfitting. The second model reached an accuracy of 99.4%.
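A sketch of the split (directory name, image size and split fraction are assumptions, not my exact settings); on top of this there is a `tf.keras.layers.Dropout(0.2)` layer before the dense head:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hold out 20% of the images for validation (assumed split fraction)
datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    'data/',                  # assumed folder with batman/ and deadpool/ subfolders
    target_size=(180, 180),
    batch_size=32,
    class_mode='sparse',
    subset='training')

val_gen = datagen.flow_from_directory(
    'data/',
    target_size=(180, 180),
    batch_size=32,
    class_mode='sparse',
    subset='validation')
```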
The performance difference from using noise in the images wasn't that big. Well, at least not on the validation images. A useful test setup still has to be created.
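One way to add noise to the images through the generator is a `preprocessing_function`; the noise level below is just an assumed value for illustration, not the exact setting I used:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def add_noise(img):
    # img is a float array still in the 0-255 range at this point;
    # add mild Gaussian noise and clip back into the valid range
    noisy = img + np.random.normal(0.0, 10.0, img.shape)
    return np.clip(noisy, 0.0, 255.0)

noisy_datagen = ImageDataGenerator(
    preprocessing_function=add_noise,  # applied before the rescale
    rescale=1./255,
    validation_split=0.2)
```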
I tried a different model with different layers. The training took longer (some minutes), but the overfitting is dramatic. I am not sure if this is because of the model layout or for a different reason.
I changed to SparseCategoricalCrossentropy (based on the TensorFlow sample for classifying images of clothing) in the hope of getting a better prediction of Batman and Deadpool.
The accuracy was quite high again, which is another indication of overfitting.
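The change itself is only the loss passed to `compile()`; `from_logits=True` assumes the last Dense layer has no softmax, which is also why the Softmax layer gets appended later for predictions:

```python
import tensorflow as tf

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
```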
It might be that ReLU is not good for my images. See https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks
I am now trying the sigmoid function. Perhaps that works better.
Using sigmoid instead of ReLU had a dramatic effect on the training and validation graphs. I am not sure what this means. Also, the accuracy is around 50%. Whatever that means...
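With only two classes, ~50% accuracy is roughly chance level. The swap itself is just the activation argument of the layers, e.g.:

```python
import tensorflow as tf

# ReLU version (as in the tutorial)
tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')
tf.keras.layers.Dense(128, activation='relu')

# Sigmoid version being tested
tf.keras.layers.Conv2D(16, 3, padding='same', activation='sigmoid')
tf.keras.layers.Dense(128, activation='sigmoid')
```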
Using an ImageDataGenerator for the test data as well, with model.predict_generator, worked, but apparently the predicted values are the same for all images.
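A sketch of the test-time generator (folder name and sizes are assumptions; predict_generator is deprecated in newer TF versions in favor of plain predict). One thing worth checking when every image gets the same output is whether the test images are preprocessed the same way as the training images, e.g. whether the 1./255 rescale is missing:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator(rescale=1./255)   # same preprocessing as training

test_gen = test_datagen.flow_from_directory(
    'test_data/',            # assumed folder with one subfolder per class
    target_size=(180, 180),
    batch_size=32,
    class_mode=None,         # no labels needed for prediction
    shuffle=False)           # keep order so predictions match test_gen.filenames

predictions = model.predict(test_gen)
```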
A better trained (small) model, wrapped in `tf.keras.Sequential([model, tf.keras.layers.Softmax()])`, works for the predictions. Even with the smartphone image.
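This is roughly how it looks for a single image; the file name and image size are assumptions:

```python
import tensorflow as tf

probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

# Hypothetical smartphone photo, resized to the training input size
img = tf.keras.preprocessing.image.load_img('smartphone_photo.jpg',
                                             target_size=(180, 180))
img_array = tf.keras.preprocessing.image.img_to_array(img) / 255.0
img_array = tf.expand_dims(img_array, 0)   # batch of one

probs = probability_model.predict(img_array)
print(probs)   # two probabilities, one per class
```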
This is the test data directory structure:
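(Essentially one subfolder per class, something like the following; the file names are placeholders:)

```
test_data/
├── batman/
│   ├── img_001.jpg
│   └── ...
└── deadpool/
    ├── img_001.jpg
    └── ...
```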
This is the result of the prediction:
I am hoping that 0 is a Batman and 1 a Deadpool. The next test is with the data in one folder.
Having the test data in one folder also works.
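For the one-folder case, the images can also be loaded directly without a generator; the folder name here is an assumption:

```python
import pathlib
import numpy as np
import tensorflow as tf

image_paths = sorted(pathlib.Path('test_images').glob('*.jpg'))   # assumed flat folder

batch = np.stack([
    tf.keras.preprocessing.image.img_to_array(
        tf.keras.preprocessing.image.load_img(p, target_size=(180, 180))) / 255.0
    for p in image_paths])

probs = probability_model.predict(batch)
for path, class_index in zip(image_paths, np.argmax(probs, axis=1)):
    print(path.name, class_index)   # 0 or 1, hopefully Batman vs. Deadpool
```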