nayeemrizve / ups

"In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning" by Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, Mubarak Shah (ICLR 2021)
MIT License

About the relationship between prediction uncertainty and expected calibration error (ECE) #6

Closed chengjianhong closed 3 years ago

chengjianhong commented 3 years ago

Hi, can you tell me how to draw Fig. 1(a)? I know each sample has an uncertainty under a model, but how do you calculate the ECE for a single sample? My understanding is that ECE is computed over the whole dataset.

nayeemrizve commented 3 years ago

Hello, thank you for expressing interest in our work. To draw Figure 1(a), we selected subsets of the data using different uncertainty thresholds (x-axis) and computed the ECE score for each such subset (y-axis). So ECE is still computed over a set of samples, not a single sample; the threshold just controls which samples enter that set.
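A minimal sketch of this procedure, assuming you already have per-sample predictive uncertainties (e.g. from MC dropout), predicted-class confidences, predictions, and labels as NumPy arrays. The function names and threshold values here are illustrative, not from the repository's code:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Standard binned ECE: weighted average of |accuracy - confidence| per bin."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        avg_conf = confidences[in_bin].mean()
        ece += (in_bin.sum() / n) * abs(acc - avg_conf)
    return ece

def ece_vs_uncertainty(uncertainties, confidences, predictions, labels, thresholds):
    """For each threshold t, compute ECE on the subset with uncertainty <= t."""
    scores = []
    for t in thresholds:
        subset = uncertainties <= t
        if not subset.any():
            scores.append(np.nan)
            continue
        scores.append(expected_calibration_error(
            confidences[subset], predictions[subset], labels[subset]))
    return scores
```

Plotting `thresholds` against the returned `scores` then reproduces the shape of a curve like Figure 1(a): as the uncertainty threshold tightens, the retained subset is typically better calibrated (lower ECE).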