GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.
MIT License
1.54k stars 310 forks

permutedMNIST accs #4

Closed Johswald closed 5 years ago

Johswald commented 5 years ago

Hey - thank you for the good implementation of all these methods. Very helpful. To start a permutedMNIST run, I executed

python main.py --experiment 'permMNIST' --scenario 'task' --tasks 10 --replay=generative --distill --feedback --iters 5000

`--iters` needs to be 5000 to get the results reported in the paper, correct?

GMvandeVen commented 5 years ago

Correct, the permuted MNIST results reported in the paper used 5000 iterations per task. I should mention that for permuted MNIST there were also 1000 units in each hidden layer (as opposed to 400 for split MNIST) and the learning rate was 0.0001 (as opposed to 0.001), so to reproduce the results reported in the paper you would also have to add --fc-units=1000 --lr=0.0001.
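Putting the flags from this thread together, the full reproduction command would then presumably look like this (same entry point and flags as above, just combined):

```shell
# Permuted MNIST, Task-IL scenario, generative replay with distillation and
# feedback connections, with the paper's hyperparameters:
# 5000 iterations per task, 1000 units per hidden layer, learning rate 0.0001
python main.py --experiment 'permMNIST' --scenario 'task' --tasks 10 \
    --replay=generative --distill --feedback \
    --iters 5000 --fc-units=1000 --lr=0.0001
```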

Johswald commented 5 years ago

Sorry, of course. I forgot about that. Thanks!