Closed: kktsubota closed this issue 6 years ago
In our released code, we chose these hyper-parameters for better performance. Note that the core of DAC is unchanged; only some hyper-parameters differ. These choices improve performance on MNIST from 0.9660 to 0.98 (ACC).
Thank you for updating the code.
I have a question. The settings in your code do not match those in your paper. For example, the batch size is 32 in the paper but 128 in the MNIST code; the network architectures differ; the thresholds for deciding the target labels differ; and the loss function is binary cross-entropy in the paper but mean squared error in the code (see the sketch below). Which settings did you use in your experiments?
Thank you.
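For context on the loss-function discrepancy raised above: DAC trains on pairwise cosine similarities of label features, and the paper formulates the objective as a binary cross-entropy over selected pairs, while the released code reportedly uses a mean-squared-error form. The sketch below is a minimal NumPy illustration of that difference, not the authors' implementation; the function name `pairwise_losses` and the threshold values `u` and `l` are assumptions chosen only for illustration.

```python
import numpy as np

def pairwise_losses(label_feats, u=0.95, l=0.455, eps=1e-8):
    """Contrast a BCE-style (paper) vs. MSE-style (code) pairwise objective.

    label_feats: (n, k) array of L2-normalized soft label features.
    u, l: upper/lower cosine-similarity thresholds for selecting
          "similar" / "dissimilar" training pairs (illustrative values).
    """
    sim = label_feats @ label_feats.T           # pairwise cosine similarities
    sim = np.clip(sim, eps, 1.0 - eps)

    pos = sim > u                               # pairs treated as same cluster
    neg = sim < l                               # pairs treated as different clusters
    selected = pos | neg                        # only confident pairs contribute
    r = pos.astype(float)                       # binary target labels r_ij

    # Paper-style objective: binary cross-entropy on selected pairs.
    bce = -(r * np.log(sim) + (1.0 - r) * np.log(1.0 - sim))
    # Code-style objective (as reported in this issue): mean squared error.
    mse = (sim - r) ** 2

    return bce[selected].mean(), mse[selected].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.random((64, 10))
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # normalize rows
    print(pairwise_losses(feats))
```

Both losses are minimized when confident similar pairs approach similarity 1 and confident dissimilar pairs approach 0, but their gradients differ, so the choice can plausibly affect the reported ACC.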