hxu38691 closed this issue 4 years ago
Hi, thanks for your interest!
The per-sample loss correlates strongly with the correctness of the label and can be modeled with a univariate GMM. Dimension-reduced representations may not show such an obvious pattern.
At inference time the label is not given; only the network is used for prediction.
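As a minimal sketch of the idea described above (not the authors' code): fit a two-component univariate GMM to the per-sample losses and treat the posterior of the lower-mean component as the probability that a label is clean. The synthetic losses below are made up purely for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-sample losses: clean samples cluster at low loss,
# noisy-label samples at higher loss (illustrative values, not real data).
losses = np.concatenate([rng.normal(0.2, 0.05, 700),
                         rng.normal(1.0, 0.20, 300)]).reshape(-1, 1)

# Two-component GMM over the 1-D loss distribution.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(losses)

# The component with the smaller mean corresponds to clean samples.
clean_comp = int(np.argmin(gmm.means_))
p_clean = gmm.predict_proba(losses)[:, clean_comp]
is_clean = p_clean > 0.5  # simple threshold on the posterior
```

With well-separated loss modes, nearly all low-loss samples get a high clean posterior and nearly all high-loss samples a low one.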
I just realized the samples are clean...
Thanks, I’m closing this.
Hello, I am new to the topic of label noise but very interested in your algorithm. I have two questions in mind, if you can provide some insights:
Why fit the loss with a GMM instead of alternatives such as dimension-reduced learned representations? Have you experimented with other settings?
Related to the first question: if the loss is the input to the GMM, how is inference done when the validation set also contains noisy labels? Can we still separate clean from noisy labels without the posterior loss?
Thank you