Hi LP!
I stumbled on your repo here since we're building our own alternative to backprop at aolabs.ai, using weightless neural networks.
Unless I'm missing something, MNIST has only 70k samples total (60k training, 10k testing), so I'm not sure how you have 300,000 samples in your Top1-Accuracy vs Samples plot. Did you use EMNIST?
I'd love to show you what we've cooked up applying our WNNs to MNIST, too. It seems we're much more sample-efficient. Let's chat some time!