PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.
Hi!
I recently tried to run this open-source code and have some questions about the "EVALUATION" part of the experiment's printed output. When running experiments with None ("lower target"), why is the accuracy for Context 1 through Context 4 all 0.0000 in the "Accuracy of final model on test-set" part of the output? Even if the model has forgotten those contexts, shouldn't the accuracy still be above zero? Is there a restriction in the code that prevents the test results for Contexts 1-4 from being reported?
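For context, here is a minimal sketch (not the repository's actual code, all names hypothetical) of why an accuracy of exactly 0.0000 on earlier contexts is plausible in the class-incremental scenario: a model trained with no continual-learning method ("None") typically ends up predicting only the classes of the most recent context, so test samples from Contexts 1-4 can never be classified correctly, which is far below the chance level one might intuitively expect.

```python
import random

random.seed(0)
NUM_CONTEXTS, CLASSES_PER_CONTEXT = 5, 2


def final_model_predict(x):
    # A fully "forgetful" model: it only ever outputs classes from the
    # last context (classes 8 and 9 here), mimicking catastrophic
    # forgetting in the class-incremental scenario.
    last_classes = range((NUM_CONTEXTS - 1) * CLASSES_PER_CONTEXT,
                         NUM_CONTEXTS * CLASSES_PER_CONTEXT)
    return random.choice(list(last_classes))


accuracies = {}
for context in range(1, NUM_CONTEXTS + 1):
    # Fake test set: 100 samples whose true labels belong to this context.
    labels = [random.choice(range((context - 1) * CLASSES_PER_CONTEXT,
                                  context * CLASSES_PER_CONTEXT))
              for _ in range(100)]
    preds = [final_model_predict(None) for _ in labels]
    accuracies[context] = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    print(f"Context {context}: accuracy {accuracies[context]:.4f}")
# Contexts 1-4 score exactly 0.0000; only the last context is above zero.
```

Under this assumption the 0.0000 values would be a genuine measurement rather than a reporting restriction, though whether that matches the repository's evaluation code is for the maintainers to confirm.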