ContinualAI / avalanche

Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
http://avalanche.continualai.org
MIT License

CORe50 Dataset and Benchmark Improvements #547

Open vlomonaco opened 3 years ago

vlomonaco commented 3 years ago

We should:

tachyonicClock commented 3 years ago

Additionally, I would like to see a flag for using a sub-sampled test set. I've found the CORe50 test set to be needlessly large. In fact, the CORe50 authors themselves sub-sample it in later work:

"we sub-sampled the test set by selecting 1 frame every second (from the original 20 fps)"
Lomonaco, V., Maltoni, D., & Pellegrini, L. (2020). Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches. ArXiv:1907.03799 [Cs, Stat]. http://arxiv.org/abs/1907.03799
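To make the idea concrete, here is a minimal sketch of that 1-frame-per-second sub-sampling using `torch.utils.data.Subset`. This is a hypothetical helper, not an existing Avalanche API; it also applies a single global stride, whereas the paper sub-samples within each test session, so a real implementation would stride per session:

```python
import torch
from torch.utils.data import Subset, TensorDataset

FPS = 20  # CORe50 sessions are recorded at 20 fps (per the quote above)

def subsample_test_set(dataset, stride=FPS):
    """Keep roughly 1 frame per second by taking every `stride`-th sample.

    Hypothetical helper: assumes frames within the dataset are in
    temporal order, which holds per-session but not across sessions.
    """
    indices = list(range(0, len(dataset), stride))
    return Subset(dataset, indices)

# Toy example: a "test set" of 100 consecutive frames.
full = TensorDataset(torch.arange(100))
small = subsample_test_set(full)
print(len(small))  # 100 frames at 20 fps -> 5 kept frames
```

With the real test set (~45k frames per the dataset paper), the same stride would cut evaluation cost by a factor of 20.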

Additionally, some sort of meta-loader that wraps any benchmark and applies sub-sampling to any dataset might be useful.
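Such a wrapper could be benchmark-agnostic by operating on any map-style PyTorch dataset. A rough sketch of one possible shape (the class name, `fraction` parameter, and seeding are all assumptions, not an existing API):

```python
import random
from torch.utils.data import Dataset, TensorDataset
import torch

class SubsampledDataset(Dataset):
    """Expose a deterministic random subset of any map-style dataset.

    Hypothetical meta-wrapper: it only needs __len__ and __getitem__
    on the wrapped dataset, so it composes with any benchmark's
    train or test set.
    """

    def __init__(self, dataset, fraction=0.05, seed=0):
        self.dataset = dataset
        n = max(1, int(len(dataset) * fraction))
        rng = random.Random(seed)  # fixed seed -> reproducible subset
        self.indices = sorted(rng.sample(range(len(dataset)), n))

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.dataset[self.indices[i]]

# Usage on a toy dataset: keep 10% of 100 samples.
full = TensorDataset(torch.arange(100))
small = SubsampledDataset(full, fraction=0.1)
print(len(small))  # 10
```

Sorting the sampled indices keeps the wrapped dataset's original ordering, which matters if downstream code assumes temporally ordered frames.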