The softmax function specifically uses exponentiation to exacerbate the differences between scores (that is what makes it a soft 'max'). You can normalize scores by means other than a softmax.
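For example, temperature scaling divides the logits by a constant T > 1 before the softmax, which flattens the output distribution. A minimal sketch (the logits and the value T = 4.0 are made up for illustration):

```python
import torch
import torch.nn.functional as F

# Hypothetical logits from a two-class classifier.
logits = torch.tensor([[2.0, -2.0]])

# Plain softmax: exponentiation pushes the probabilities toward 0 and 1.
probs = F.softmax(logits, dim=-1)            # tensor([[0.9820, 0.0180]])

# Dividing by a temperature T > 1 softens the distribution.
T = 4.0
soft_probs = F.softmax(logits / T, dim=-1)   # tensor([[0.7311, 0.2689]])
```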
Related to your title: a log loss such as BCE penalizes high-confidence wrong predictions more heavily.
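A quick sketch of that asymmetry, using PyTorch's built-in BCE on made-up probabilities:

```python
import torch
import torch.nn.functional as F

target = torch.tensor([1.0])  # positive class

# BCE = -log(p) for a positive target, so the penalty grows steeply
# as a wrong prediction becomes more confident:
print(F.binary_cross_entropy(torch.tensor([0.90]), target))  # ~0.105 (confident, correct)
print(F.binary_cross_entropy(torch.tensor([0.50]), target))  # ~0.693 (uncertain)
print(F.binary_cross_entropy(torch.tensor([0.01]), target))  # ~4.605 (confident, wrong)
```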
A related read is the section titled "Don’t Mistake Class Probabilities for Confidence" here: https://www.inovex.de/blog/uncertainty-quantification-deep-learning/
❓ Questions & Help
I added the line
logits = torch.nn.functional.softmax(logits, dim=-1)
to convert binary classifications to a confidence score between 0.0 and 1.0. However, the predictions are very extreme, landing close to either 0.0 or 1.0 rather than somewhere in between. Is there a way to penalize the model for being so categorical? I especially want to avoid high confidence scores on false negatives.
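One common way to discourage such overconfident outputs at training time is label smoothing. A minimal sketch, assuming a two-output classifier trained with cross-entropy; the batch shapes and the 0.1 smoothing factor are illustrative, and the `label_smoothing` argument requires PyTorch >= 1.10:

```python
import torch

# label_smoothing mixes the one-hot targets with a uniform distribution,
# so the loss minimum is no longer at probabilities of exactly 0.0 / 1.0
# and extreme confidence is penalized.
criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(8, 2, requires_grad=True)  # dummy batch of 8, 2 classes
targets = torch.randint(0, 2, (8,))             # dummy labels
loss = criterion(logits, targets)
loss.backward()
```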