udibr / noisy_labels

TRAINING DEEP NEURAL-NETWORKS USING A NOISE ADAPTATION LAYER

A decline in the final baseline accuracy with the alternate channel_weights #3

Open GuokaiLiu opened 6 years ago

GuokaiLiu commented 6 years ago

Hello udibr, here is another question I'd like your help with. I replaced the following code, as suggested in mnist-simple.ipynb:

Before, with baseline accuracy = 0.98:

channel_weights = baseline_confusion.copy()
channel_weights /= channel_weights.sum(axis=1, keepdims=True)
# perm_bias_weights[prediction,noisy_label] = log(P(noisy_label|prediction))
channel_weights = np.log(channel_weights + 1e-8)
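For context, here is a minimal sketch of how a matrix like baseline_confusion can be built, assuming a trained baseline classifier with a Keras-style predict and integer noisy labels; the names baseline_model, X_train, and y_train_noisy are illustrative, not taken from the notebook:

import numpy as np

# baseline_confusion[prediction, noisy_label] counts how often the baseline
# predicts class i while the noisy training label is j.
pred = baseline_model.predict(X_train).argmax(axis=-1)
baseline_confusion = np.zeros((nb_classes, nb_classes))
for p, n in zip(pred, y_train_noisy):
    baseline_confusion[p, n] += 1.

Row-normalizing these counts and taking the log, as in the snippet above, yields log(P(noisy_label|prediction)).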

After, with baseline accuracy = 0.78:

# If you don't have a pre-trained baseline model then use this
channel_weights = (
    np.array([[np.log(1. - NOISE_LEVEL) if i == j
               else np.log(0.46 / (nb_classes - 1.))
               for j in range(nb_classes)]
              for i in range(nb_classes)])
    + 0.01 * np.random.random((nb_classes, nb_classes)))
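For what it's worth, the comprehension above encodes a uniform-noise prior: keep the predicted label with probability 1 - NOISE_LEVEL and flip to each of the other classes with equal probability; the hard-coded 0.46 appears to be NOISE_LEVEL itself. A minimal sketch of the same matrix written directly, under that assumption:

import numpy as np

NOISE_LEVEL = 0.46   # assumption: the hard-coded 0.46 above is this value
nb_classes = 10

# Uniform noise channel: the diagonal keeps the label, off-diagonals share the rest.
prior = np.full((nb_classes, nb_classes), NOISE_LEVEL / (nb_classes - 1.))
np.fill_diagonal(prior, 1. - NOISE_LEVEL)
print(prior.sum(axis=1))   # every row sums to 1, i.e. a valid noise channel

channel_weights = np.log(prior) + 0.01 * np.random.random((nb_classes, nb_classes))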


There is a significant decline in the final baseline accuracy, and I can't find the reason. Any suggestions? Here is my Jupyter notebook result:

[screenshot: Jupyter notebook result]

Also, I don't understand why the 0.01 * np.random.random((nb_classes, nb_classes)) term appears in the expression above.

Thanks for your help :)

Billy1900 commented 3 years ago
  1. It seems like you have solved your problem?
  2. Maybe it is a way to initialize? (see the sketch below)
  3. I have also written a PyTorch version; feel free to point out issues: https://github.com/Billy1900/Noise-Adaption-Layer
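Regarding point 2: a plausible reading is that, without the 0.01 * np.random.random(...) term, every off-diagonal entry in a row of channel_weights would start out exactly equal, and the small jitter simply breaks those ties at initialization. A minimal sketch (assuming NOISE_LEVEL = 0.46 and nb_classes = 10 as in the notebook):

import numpy as np

nb_classes = 10
NOISE_LEVEL = 0.46

prior = np.full((nb_classes, nb_classes), NOISE_LEVEL / (nb_classes - 1.))
np.fill_diagonal(prior, 1. - NOISE_LEVEL)

without_jitter = np.log(prior)
with_jitter = without_jitter + 0.01 * np.random.random((nb_classes, nb_classes))

print(np.unique(without_jitter[0]).size)  # 2: the diagonal value plus one tied off-diagonal value
print(np.unique(with_jitter[0]).size)     # 10: every entry in the row is now distinct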