Open BeepBump opened 6 years ago
From Table 4 of the paper, the results are 6% and 12%, which correspond to the 6.13% and 12.20% from my test run.
I have another question about saab_compact.py:
```python
bias = LA.norm(sample_patches, axis=1)
bias = np.max(bias)
pca_params['Layer_%d/bias' % i] = bias
# Add bias
sample_patches_centered_w_bias = sample_patches_centered + 1 / np.sqrt(num_channels) * bias
# Transform to get data for the next stage
transformed = np.matmul(sample_patches_centered_w_bias, np.transpose(kernels))
# Remove bias
e = np.zeros((1, kernels.shape[0]))
e[0, 0] = 1
transformed -= bias * e
```
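To check my understanding, I put together a minimal numerical sketch (not the repository's code; the toy data and kernel construction are my own assumptions). It shows that when the first kernel is the DC kernel `ones / sqrt(N)` and the remaining (AC) kernels are orthogonal to it, adding a constant `bias / sqrt(N)` to the input shifts only the DC coordinate of the transform, by exactly `bias`, so subtracting `bias * e` afterwards recovers the plain inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # patch dimension (toy value)
x = rng.standard_normal((5, N))          # stand-in for centered sample patches

# Build a kernel matrix whose first row is the DC kernel; the remaining rows
# are made orthogonal to DC (zero-mean), as AC kernels from PCA would be.
dc = np.ones((1, N)) / np.sqrt(N)
q, _ = np.linalg.qr(rng.standard_normal((N, N)))
ac = q.T - (q.T @ dc.T) @ dc             # project out the DC direction
ac /= np.linalg.norm(ac, axis=1, keepdims=True)
kernels = np.vstack([dc, ac[: N - 1]])

bias = 3.7
# Path 1: add the bias before the inner product, remove it from DC after.
shifted = x + bias / np.sqrt(N)
out = shifted @ kernels.T
e = np.zeros((1, kernels.shape[0]))
e[0, 0] = 1
out -= bias * e
# Path 2: plain inner product with no bias at all.
plain = x @ kernels.T

print(np.allclose(out, plain))           # prints True
```

So, if I read it right, the add-then-subtract only touches the DC channel and cancels exactly; the two orders give the same transformed output.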
When I read the paper, I thought Saab might do the inner product first and then add the bias to prevent negative responses. However, why does the code seem to add the bias before the inner product and then subtract it afterwards?
Sorry to bother you: when I run FF/FGS and FF/BIM for test accuracy, I get values different from those in the paper (FF/FGS: 6.13% and FF/BIM: 12.20%).
P.S. I use the command below:
```shell
python cifar_keras.py -train_dir cifar_ff_model -filename FF_init_model.ckpt -method BIM/FGSM
```