I think this is an open question. The code runs the reduced network as defined in FALCON, but I haven't achieved better results with the full network. Our paper shows that the gap also appears with smaller networks (see Figure 10); the gap is smaller there, but so is the overall error rate (a 2% gap at ~10% overall error).
There is no option to change the arguments outside the script.
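Since the script has to be edited directly, here is a minimal sketch of how falcon_alex.mpc itself could be extended to read such options at compile time. The token names (`lr...`, `dropout`, `batchnorm`) are invented for illustration; `program.args` is the list of extra arguments MP-SPDZ passes to an .mpc script.

```python
# Hypothetical sketch: parse extra options from program.args inside
# falcon_alex.mpc. The token names below are invented for illustration.
lr = 0.01                                    # default learning rate
for arg in program.args:
    if arg.startswith('lr'):
        lr = float(arg[2:])                  # e.g. pass "lr0.001"
use_dropout = 'dropout' in program.args      # e.g. append "dropout"
use_batchnorm = 'batchnorm' in program.args
# ... then use lr / use_dropout / use_batchnorm when building the
# layer list and the optimizer further down in the script.
```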
Hi, thanks a lot for the great project! How do I run AlexNet correctly so that it achieves high accuracy on CIFAR-10? With my local PyTorch implementation of AlexNet I reach 83% accuracy using SGD with momentum=0.9 and learning_rate=0.001 after 20 epochs, and already 67% accuracy after 5 epochs.
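For reference, here is a minimal sketch of such a plaintext baseline. The exact CIFAR-sized AlexNet variant below is an assumption, not the architecture actually used; only the hyperparameters (SGD, momentum=0.9, lr=0.001, 20 epochs) come from the description above.

```python
# Hypothetical plaintext baseline: AlexNet-style CNN on CIFAR-10 (32x32 inputs).
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train_set = torchvision.datasets.CIFAR10('data', train=True, download=True,
                                         transform=T.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# CIFAR-sized AlexNet variant (assumed architecture; adjust to your model).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 192, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)

# Hyperparameters quoted above: SGD, momentum=0.9, lr=0.001, 20 epochs.
opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```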
I tried to reproduce that setup by modifying the code in falcon_alex.mpc, roughly along the lines of the sketch below:
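A minimal sketch of what such an edit could look like, assuming MP-SPDZ's Compiler/ml.py API and the script's existing `import ml` and `layers` definition; the `gamma` (learning rate) and `momentum` attribute names are assumptions and may differ between versions:

```python
# Hypothetical edit inside Programs/Source/falcon_alex.mpc: replace the
# optimizer selection via program.args with hard-coded SGD + momentum.
optimizer = ml.SGD(layers, 20)             # 20 epochs, as in the baseline
optimizer.gamma = MemValue(cfix(0.001))    # assumed learning-rate attribute
optimizer.momentum = 0.9                   # assumed momentum attribute
optimizer.run(batch_size=128)
```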
Then I ran:
emul alex prob 46 20 32
However, I am only getting the following results:
train_acc: 0.579883 (29022/50048) test loss: 4.26651 acc: 0.5651 (5651/10000)
In earlier tests, without modifying the source code and using adamapprox, I got the following results:
emul alex prob 46 20 32 adamapprox
train_acc: 0.815377 (40808/50048) test loss: 6.72499 acc: 0.6548 (6548/10000)
That gap can't all come from rounding errors due to probabilistic truncation (prob) and the 32-bit precision, right?
What are the exact command-line arguments to enable batchnorm, dropout, momentum, and learning_rate without modifying the script?