An unofficial PyTorch implementation of "Large Scale Incremental Learning" (https://arxiv.org/abs/1905.13260).
Download the CIFAR-100 dataset from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz.
Put the extracted meta, train, and test files into ./cifar100.
Run `python main.py`.
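The python version of CIFAR-100 stores each split as a pickled dict. As a quick sanity check of the extracted files (independent of how main.py actually loads them), a minimal sketch of reading them looks like this; the file keys follow the official dataset format, while the paths simply assume the ./cifar100 layout described above:

```python
import pickle
import numpy as np

def load_cifar100_split(path):
    """Read one CIFAR-100 python-version pickle file (train or test)."""
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    # Images are stored as N x 3072 uint8 rows; reshape to N x 3 x 32 x 32.
    images = batch[b"data"].reshape(-1, 3, 32, 32)
    labels = np.array(batch[b"fine_labels"])  # 100 fine-grained class ids
    return images, labels

train_x, train_y = load_cifar100_split("./cifar100/train")
test_x, test_y = load_cifar100_split("./cifar100/test")

with open("./cifar100/meta", "rb") as f:
    meta = pickle.load(f, encoding="bytes")
class_names = [name.decode() for name in meta[b"fine_label_names"]]
```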
Accuracy (%) on CIFAR-100 after each 20-class increment:

| Classes | 20 | 40 | 60 | 80 | 100 |
|---|---|---|---|---|---|
| Paper | 85.20 | 74.59 | 66.76 | 60.14 | 55.55 |
| Implementation | 83.80 | 68.75 | 63.50 | 58.25 | 54.93 |
Learned bias-correction parameters (alpha, beta) after each increment, for two different optimizer settings:

First optimizer setting:

| Classes | 20 | 40 | 60 | 80 | 100 |
|---|---|---|---|---|---|
| Alpha | 1.0 | 0.788 | 0.718 | 0.700 | 0.696 |
| Beta | 0.0 | -0.289 | -0.310 | -0.325 | -0.327 |

Second optimizer setting:

| Classes | 20 | 40 | 60 | 80 | 100 |
|---|---|---|---|---|---|
| Alpha | 1.0 | 1.006 | 1.017 | 0.976 | 0.983 |
| Beta | 0.0 | -2.809 | -3.496 | -3.447 | -3.683 |
Different optimizers lead to noticeably different learned alpha and beta values in the bias-correction stage.
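For context, a minimal sketch of a BiC-style bias-correction layer is shown below. It assumes logits are ordered [old classes, new classes] and is not taken from this repo's code; the SGD optimizer and its learning rate are placeholder choices used only to illustrate where the optimizer enters the bias-correction stage.

```python
import torch
import torch.nn as nn

class BiasLayer(nn.Module):
    """Two-parameter linear correction applied to new-class logits only."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))   # scale, initialised to 1.0
        self.beta = nn.Parameter(torch.zeros(1))   # shift, initialised to 0.0

    def forward(self, logits, num_old_classes):
        # Old-class logits pass through unchanged; new-class logits are
        # rescaled and shifted by the two learned scalars.
        old = logits[:, :num_old_classes]
        new = self.alpha * logits[:, num_old_classes:] + self.beta
        return torch.cat([old, new], dim=1)

# Only alpha and beta are trained in the bias-correction stage (the backbone
# and classifier stay frozen). The optimizer chosen here is a hypothetical
# example; swapping it is what produces different alpha/beta values as in the
# tables above.
bias_layer = BiasLayer()
optimizer = torch.optim.SGD(bias_layer.parameters(), lr=0.001)
```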