I was just reading the paper and the code, and it seems that this code may not be able to reproduce the accuracy presented in the paper.
I wonder if it is possible to change the classification rule: instead of simply choosing the class with the largest log-probability, predict Unknown only if its softmax probability exceeds (#Unknown samples / #total samples); otherwise choose the known class with the largest log-probability as the output.
For example, suppose the classes are [0, 1, 2, Unknown] (with 1000 samples each of 0, 1, and 2 and 3000 samples of Unknown in the test set) and their softmax probabilities are [0.1, 0.2, 0.3, 0.4]. The current code would choose Unknown as the result. Instead, since the probability for Unknown is 0.4, which is lower than 0.5 (3000/6000), we should choose class 2 as the result.
I wonder whether this could improve the classification accuracy.
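To make the proposal concrete, here is a minimal sketch of the thresholded decision rule I have in mind. The function names and the `unknown_idx`/`threshold` parameters are my own, not from the repository; this assumes the model outputs one logit vector per sample with Unknown as one of the classes.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def classify_with_unknown_threshold(logits, unknown_idx, threshold):
    """Predict Unknown only when its softmax probability exceeds
    `threshold` (e.g. #Unknown samples / #total samples); otherwise
    take the argmax over the known classes."""
    probs = softmax(logits)
    preds = []
    for p in probs:
        if p[unknown_idx] > threshold:
            preds.append(unknown_idx)
        else:
            # Argmax over the known classes only.
            known = np.delete(p, unknown_idx)
            idx = int(np.argmax(known))
            # Map the index back to the original class numbering.
            if idx >= unknown_idx:
                idx += 1
            preds.append(idx)
    return preds

# The example above: classes [0, 1, 2, Unknown] with softmax
# probabilities [0.1, 0.2, 0.3, 0.4] and threshold 0.5 (3000/6000).
logits = np.log(np.array([[0.1, 0.2, 0.3, 0.4]]))
print(classify_with_unknown_threshold(logits, unknown_idx=3, threshold=0.5))  # [2]
```

With the plain argmax rule this sample would be labeled Unknown (0.4 is the largest probability), but under the threshold rule it is labeled class 2, since 0.4 does not exceed 0.5.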