Hi, thanks for sharing your code! Much appreciated.
I'm curious about your implementation of the soft/hard loss from Reed et al. It seems that the `fix_output` function just copies the label when used for training the model (see https://github.com/udibr/noisy_labels/blob/master/jacob-reed.py#L780). However, the whole idea of the Reed et al. paper is to mix the model's own predictions into the target as a 'bootstrapped' label, which is what makes training more robust to noisy labels.
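For reference, here is a minimal NumPy sketch of what I understand the soft and hard bootstrapping targets from the paper to look like (the function names and the `beta` values are my own, chosen to match the paper's notation, not anything from this repo):

```python
import numpy as np

def soft_bootstrap_loss(y_true, y_pred, beta=0.95):
    # Soft bootstrapping: blend the (possibly noisy) label with the
    # model's own soft prediction, then take cross-entropy against
    # the prediction: target = beta * t + (1 - beta) * p.
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    target = beta * y_true + (1.0 - beta) * y_pred
    return -np.sum(target * np.log(y_pred), axis=-1)

def hard_bootstrap_loss(y_true, y_pred, beta=0.8):
    # Hard bootstrapping: same blend, but using the one-hot argmax
    # of the prediction instead of the soft prediction itself.
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    z = np.eye(y_pred.shape[-1])[np.argmax(y_pred, axis=-1)]
    target = beta * y_true + (1.0 - beta) * z
    return -np.sum(target * np.log(y_pred), axis=-1)
```

With `beta = 1.0` both reduce to plain cross-entropy, which is what the current `fix_output` behavior effectively amounts to; the bootstrapping effect only appears for `beta < 1`.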