# If you don't have a pre-trained baseline model then use this
channel_weights = (
    np.array([[np.log(1. - NOISE_LEVEL) if i == j
               else np.log(0.46 / (nb_classes - 1.))
               for j in range(nb_classes)]
              for i in range(nb_classes)])
    + 0.01 * np.random.random((nb_classes, nb_classes)))
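For context, here is a minimal runnable sketch of what this initialization produces, assuming `NOISE_LEVEL = 0.46` and `nb_classes = 10` (the hard-coded `0.46` in the off-diagonal term appears to be the same noise level). Each row of `exp(channel_weights)`, before the small random term is added, is exactly a probability distribution: `1 - NOISE_LEVEL` mass on the true label and `NOISE_LEVEL` spread evenly over the other labels.

```python
import numpy as np

NOISE_LEVEL = 0.46   # assumed value, as in the notebook
nb_classes = 10      # assumed (MNIST has 10 classes)

channel_weights = (
    np.array([[np.log(1. - NOISE_LEVEL) if i == j
               else np.log(NOISE_LEVEL / (nb_classes - 1.))
               for j in range(nb_classes)]
              for i in range(nb_classes)])
    + 0.01 * np.random.random((nb_classes, nb_classes)))

# The same matrix without the 0.01 * random perturbation:
base = np.where(np.eye(nb_classes, dtype=bool),
                np.log(1. - NOISE_LEVEL),
                np.log(NOISE_LEVEL / (nb_classes - 1.)))

# exp of each row is a valid probability distribution over labels
print(np.exp(base).sum(axis=1))
```

Without the random term, every off-diagonal entry in a row would be identical; the tiny perturbation breaks that symmetry so the rows can be trained apart.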
Hello Udibr, I have another question I'd like your help with. I replaced the following code as suggested in mnist-simple.ipynb:
Before, with baseline accuracy = 0.98:
After, with baseline accuracy = 0.78:
There is a significant decline in the final baseline accuracy, and I can't find the reason. Do you have any suggestions? Here is my Jupyter notebook result.
Also, I don't understand why the term
0.01 * np.random.random((nb_classes, nb_classes))
appears in the above expression. Thanks for your help :)