weightagnostic / weightagnostic.github.io

repo for interactive article
https://arxiv.org/abs/1906.04358
Creative Commons Attribution 4.0 International

Training WANNs on MNIST with backprop #17

Open · mixingtime opened 4 years ago

mixingtime commented 4 years ago

Hi,

First of all, thank you so much for the awesome work! I think your idea is very interesting and your presentation is very clear. I really enjoyed reading your paper.

One thing I'm curious about: for training WANNs on MNIST, you observed that backprop did not perform as well as black-box optimizers (Appendix A.2.4). I ran some backprop experiments on MNIST as well, and found that a WANN trained with the Adam optimizer reached 93+% accuracy on the test set. Out of curiosity, did you observe similar results in your experiments? Details about my experiments and results can be found here.
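For concreteness, here is a minimal sketch of the kind of backprop setup I mean, in PyTorch. The mask, dimensions, and hyperparameters below are made up for illustration, and a real WANN is a DAG with mixed activations rather than a single layer, so this only stands in for the idea of training the evolved topology's weights with Adam:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class MaskedWANN(nn.Module):
    """Toy stand-in for an evolved WANN: a single linear map whose
    weights are constrained to a fixed 0/1 topology mask. The actual
    evolved network is a DAG with per-node activations; in practice
    the mask would be exported from the evolved WANN graph."""
    def __init__(self, mask):
        super().__init__()
        self.register_buffer("mask", mask)               # fixed topology
        self.weight = nn.Parameter(torch.randn_like(mask) * 0.1)

    def forward(self, x):
        x = x.view(x.size(0), -1)                        # flatten 28x28 digits
        return x @ (self.weight * self.mask)             # absent connections stay 0

# Hypothetical sparse topology (~5% of possible connections).
mask = (torch.rand(784, 10) < 0.05).float()

model = MaskedWANN(mask)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=128, shuffle=True)

for epoch in range(3):
    for x, y in loader:
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The point of the mask is that gradients only ever update weights on existing connections, so the evolved topology stays fixed and backprop tunes the individual weights rather than the single shared weight.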

Thank you for your time and happy new year!