handav opened 5 years ago
Do you mean that the network predicts every image in the test set as neutral?
I have the same problem. I replaced the FER2013 emotion labels with the majority-voted labels from FER+, then trained MobileNet on that dataset and got ~80% test accuracy.
Then I ran inference on a random selection of images downloaded from the web. Most predictions came out as neutral, even for images that should clearly be happiness or sadness.
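For reference, here is a minimal sketch of the majority-vote relabeling step described above. The emotion column names and the `Image name` column follow the layout of the FER+ `fer2013new.csv` file, but check them against your copy; the sample rows are synthetic.

```python
import csv
from io import StringIO

# Emotion vote columns as assumed from the FER+ fer2013new.csv layout.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def majority_label(votes):
    """Return the index of the emotion with the most annotator votes."""
    return max(range(len(votes)), key=lambda i: votes[i])

def relabel(ferplus_csv):
    """Map each image name to its majority-vote FER+ emotion label."""
    labels = {}
    for row in csv.DictReader(ferplus_csv):
        votes = [int(row[e]) for e in EMOTIONS]
        labels[row["Image name"]] = EMOTIONS[majority_label(votes)]
    return labels

# Tiny synthetic example in the assumed FER+ column layout.
sample = StringIO(
    "Usage,Image name,neutral,happiness,surprise,sadness,"
    "anger,disgust,fear,contempt\n"
    "Training,fer0000000.png,1,8,0,1,0,0,0,0\n"
    "Training,fer0000001.png,9,0,0,1,0,0,0,0\n"
)
print(relabel(sample))
# {'fer0000000.png': 'happiness', 'fer0000001.png': 'neutral'}
```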
With the cross-entropy model, applying softmax to the last dense layer, every image seems to get neutral as its highest-scoring class (99%+). Is that in line with your results? Any suggestions for neutralizing neutral?
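One way to confirm the collapse is to apply softmax to the raw logits over a batch and look at the top-class distribution. This is a standalone sketch with made-up logits (8 FER+ classes, class 0 assumed to be neutral), not the actual model output:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical last-dense-layer logits for 3 images; class 0 = neutral.
logits = np.array([
    [9.0, 1.0, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0],  # collapsed to neutral
    [2.0, 1.9, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0],  # a much closer call
    [8.5, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0],
])
probs = softmax(logits)
print(probs.argmax(axis=1))  # predicted class per image
print(probs.max(axis=1))     # confidence of the top class
```

If nearly every image lands on class 0 with confidence above 0.99 (as in the first row here), the model has effectively collapsed to neutral rather than being uncertain between classes.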