davidsonic / Interpretable_CNN

This repository is deprecated, please go to https://github.com/davidsonic/Interpretable_CNNs_via_Feedforward_Design

run FF/FGSM and FF/BIM test accuracy #1

BeepBump opened this issue 6 years ago

BeepBump commented 6 years ago

Sorry for bothering you, but when I run FF/FGSM and FF/BIM for test accuracy, I get values different from the paper (FF/FGSM: 6.13% and FF/BIM: 12.20%).

P.S. I use the command below:

    python cifar_keras.py -train_dir cifar_ff_model -filename FF_init_model.ckpt -method BIM/FGSM

davidsonic commented 6 years ago

[image: Table 4 of the paper]

From Table 4 of the paper, the results are 6% and 12%, which match the 6.13% and 12.20% from your test.

BeepBump commented 5 years ago

I have another question, about saab_compact.py:

        # Compute bias term
        bias = LA.norm(sample_patches, axis=1)  # LA is numpy.linalg
        bias = np.max(bias)  # largest patch norm over the whole sample set
        pca_params['Layer_%d/bias' % i] = bias
        # Add bias
        sample_patches_centered_w_bias = sample_patches_centered + 1 / np.sqrt(num_channels) * bias
        # Transform to get data for the next stage
        transformed = np.matmul(sample_patches_centered_w_bias, np.transpose(kernels))
        # Remove bias
        e = np.zeros((1, kernels.shape[0]))
        e[0, 0] = 1  # selects the first (DC) response
        transformed -= bias * e

When I read the paper, I thought Saab would do the inner product first and then add the bias to prevent negative responses. Why does the code instead add the bias before the inner product and then subtract it afterwards?
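For reference, here is a minimal NumPy sketch of the question (toy data; the kernel construction and all names are hypothetical, assuming only that the first kernel is the constant DC kernel and the remaining AC kernels are zero-mean, as in the paper's design). It suggests the two orderings coincide for this snippet: a constant shift of the input moves only the DC response, and the final subtraction cancels exactly that shift.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 9                                  # toy patch dimension (e.g. a 3x3 window)
    X = rng.standard_normal((100, N))      # toy centered sample patches

    # Hypothetical Saab-style kernels: first row is the constant DC kernel,
    # the remaining (AC) rows are orthonormal and zero-mean.
    dc = np.ones(N) / np.sqrt(N)
    R = rng.standard_normal((N - 1, N))
    R -= R @ np.outer(dc, dc)              # make every row orthogonal to dc
    ac, _ = np.linalg.qr(R.T)              # orthonormalize the AC directions
    kernels = np.vstack([dc, ac.T])

    bias = np.max(np.linalg.norm(X, axis=1))

    # Ordering in saab_compact.py: add the bias to the input, transform,
    # then subtract the bias from the first (DC) response only.
    e = np.zeros((1, N))
    e[0, 0] = 1
    via_code = (X + bias / np.sqrt(N)) @ kernels.T - bias * e

    # Plain transform of the centered patches, with no bias at all.
    plain = X @ kernels.T

    # The AC kernels sum to zero, so the constant input shift moves only
    # the DC response (by exactly `bias`), which the subtraction removes.
    print(np.allclose(via_code, plain))    # prints True

If that algebra carries over to the real kernels, the ordering should not change the responses themselves; the stored bias would only matter wherever it is re-applied elsewhere in the pipeline.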