-
Currently, the Supervisor and Selector are separate components: we simply boot some form of Selector that continuously maintains some form of current dataset. However, this has some problems:
1. The …
-
Hi, thank you for your great work. I was wondering if you have done any of the following experiments.
1. Have you evaluated the few-shot and zero-shot performance of the base ViT model on the CIFAR…
-
I'm very interested in your paper and admire it, so I want to reproduce your results. However, I notice that your reported accuracy is about 2 or 3 percentage points higher (conversely, the accuracy for GDumb is about 3% lower) than t…
-
[Link to the paper](https://link.springer.com/chapter/10.1007/978-3-030-58536-5_31)
-
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin implements incompatible callbacks for template . This may result in errors.…
-
Hi! I'm testing some strategies to reach the accuracy values stated in the original papers.
In particular, GDumb with mem_size=500 reaches 90% accuracy on the Split-MNIST benchmark.
For the param…
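For reference, GDumb keeps its memory class-balanced, so the budget is split roughly evenly across the classes seen so far. A minimal sketch of the per-class quota for the setting above (mem_size=500, assuming the 10 digit classes of Split-MNIST; the helper name is illustrative, not Avalanche's API):

```python
def per_class_quota(mem_size: int, num_classes: int) -> int:
    # A class-balanced buffer divides the total budget evenly
    # across the classes observed so far (integer division).
    return mem_size // num_classes

# With mem_size=500 and 10 classes, each class keeps at most 50 samples.
print(per_class_quota(500, 10))  # 50
```

So with mem_size=500 on Split-MNIST, the final model is trained on only ~50 examples per digit, which bounds the accuracy one can expect.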
-
When running GDumb on MNIST variants where the set of classes stays the same (0 to 9) in each experience, accuracy on all old tasks drops significantly with each newer task. I wanted to con…
-
On the CIFAR-10 dataset there are 5 tasks in total, each with two classes, so the training set for each task should contain 10,000 samples. Why, then, is the number of samples in t…
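The expected count follows from the dataset statistics: CIFAR-10 has 50,000 training images, 5,000 per class, so a two-class task should hold 10,000 samples. A quick arithmetic check:

```python
# CIFAR-10 training split: 50,000 images, balanced over 10 classes.
SAMPLES_PER_CLASS = 50_000 // 10  # 5,000
CLASSES_PER_TASK = 2              # Split CIFAR-10 with 5 tasks

samples_per_task = SAMPLES_PER_CLASS * CLASSES_PER_TASK
print(samples_per_task)  # 10000
```

If the benchmark reports a different number, a train/validation split or a subsampled stream is the usual explanation.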
-
We should add GDumb, either using [their official repo](https://github.com/drimpossible/GDumb) or starting from the ExperienceReplay method added by @pclucas14
-
`GDumb` does not remove samples when the number of classes increases.
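A hedged sketch of the expected behaviour: a fixed-budget, class-balanced buffer should evict samples from the currently largest class when a new (or under-represented) class arrives, so the total never exceeds `mem_size`. The names below (`BalancedBuffer`, `add`) are illustrative and not Avalanche's actual implementation; they only mirror the greedy balancing idea from the GDumb paper.

```python
import random
from collections import defaultdict


class BalancedBuffer:
    # Hypothetical fixed-size, class-balanced buffer (not Avalanche's API).
    def __init__(self, mem_size: int, seed: int = 0):
        self.mem_size = mem_size
        self.rng = random.Random(seed)
        self.slots = defaultdict(list)  # label -> stored samples

    def __len__(self):
        return sum(len(s) for s in self.slots.values())

    def add(self, sample, label):
        if len(self) < self.mem_size:
            # Budget not exhausted: always store the sample.
            self.slots[label].append(sample)
            return
        # Buffer full: accept the sample only if its class is smaller than
        # the largest class, and evict a random sample from that largest
        # class to make room. This shrinks old classes as new ones appear.
        largest = max(self.slots, key=lambda k: len(self.slots[k]))
        if label != largest and len(self.slots[label]) < len(self.slots[largest]):
            victim = self.rng.randrange(len(self.slots[largest]))
            self.slots[largest].pop(victim)
            self.slots[label].append(sample)


buf = BalancedBuffer(mem_size=4)
for i in range(4):
    buf.add(f"s{i}", 0)   # fill the buffer with class 0
buf.add("new", 1)         # class 1 appears: one class-0 sample is evicted
print(len(buf), len(buf.slots[0]), len(buf.slots[1]))  # 4 3 1
```

With this rule the buffer stays at `mem_size` as the number of classes grows, which is the behaviour the issue is asking for.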