Thanks Kevin.
Based on the paper I implemented, the authors report OpenMax performing better than Softmax, and my implementation supports that claim. The paper is from 2016, and I've heard there are now better methods than OpenMax. Take a look at the researcher's page:
https://www.wjscheirer.com/projects/openset-recognition/
If the code in this repo doesn't work, let me know.
I am sorry for the late reply. However, in "Softmax.ipynb" OpenMax actually behaves worse than Softmax. For example, in the MNIST test: for the first test image the actual label is 0 but OpenMax predicts 9; for the third image the actual label is 7 but OpenMax predicts 2; and for the fifth image the actual label is 0 but OpenMax predicts 9. Softmax predicts all of these images correctly.
Thanks for pointing that out. We are only picking out 5 random images there, and OpenMax predicted 3 of the 5 correctly. But when I ran it over the whole test set, its accuracy was about 95%. Look at the Chinese-character dataset example: OpenMax correctly classified those characters as unknown, while Softmax gave wrong predictions. That is what OpenMax was designed to do: detect inputs that fall outside the training distribution.
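For reference, here is a minimal sketch of the OpenMax recalibration idea: fit a Weibull distribution to the tail of distances between each class's mean activation vector (MAV) and its correctly classified training activations, then use the Weibull CDF at test time to shift probability mass to an extra "unknown" class. This is not the code from this repo; the function names and the `tail_size` and `alpha` defaults are illustrative, and it uses SciPy's `weibull_min` in place of the original libMR:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_openmax(activations, labels, num_classes, tail_size=20):
    # activations: logit/penultimate vectors of correctly classified
    # training samples; labels: their ground-truth classes.
    mavs, weibulls = [], []
    for c in range(num_classes):
        av_c = activations[labels == c]
        mav = av_c.mean(axis=0)                      # mean activation vector
        dists = np.sort(np.linalg.norm(av_c - mav, axis=1))
        # fit a Weibull to the tail_size largest distances (EVT tail)
        weibulls.append(weibull_min.fit(dists[-tail_size:], floc=0))
        mavs.append(mav)
    return np.array(mavs), weibulls

def openmax_probs(av, mavs, weibulls, alpha=10):
    # recalibrate one activation vector and append an "unknown" channel
    dists = np.linalg.norm(mavs - av, axis=1)
    # Weibull CDF of the distance ~ probability the sample is an outlier
    w = np.array([weibull_min.cdf(d, *p) for d, p in zip(dists, weibulls)])
    scale = np.ones(len(mavs))
    for rank, c in enumerate(np.argsort(av)[::-1][:alpha]):
        # damp the top-alpha classes, the top class most strongly
        scale[c] = 1.0 - w[c] * (alpha - rank) / alpha
    v = np.append(av * scale, np.sum(av * (1.0 - scale)))  # last = unknown
    e = np.exp(v - v.max())
    return e / e.sum()   # softmax over num_classes + 1 outputs
```

On in-distribution data this extra recalibration step can flip a few borderline predictions (as in the MNIST samples above), but the unknown channel is what lets the model reject out-of-distribution inputs like the Chinese characters.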
Many thanks!
Thank you for your work. I am a little confused about why OpenMax seems to behave worse than Softmax.