GMvandeVen / class-incremental-learning

PyTorch implementation of a VAE-based generative classifier, as well as other class-incremental learning methods that do not store data (DGR, BI-R, EWC, SI, CWR, CWR+, AR1, the "labels trick", SLDA).
MIT License

Question regarding the online incremental learning #2

Closed limzh123 closed 1 year ago

limzh123 commented 2 years ago

Hi,

Nice to meet you. May I ask whether this code is applicable to online incremental learning? I ask because your paper mentions online incremental learning.

Thanks if you could assist me :D

GMvandeVen commented 2 years ago

Hi limzh123, thanks for your interest in my code!

As described in the paper accompanying this code, the proposed method ‘Generative Classifier’ does not rely on the presence of task boundaries and is therefore applicable to ‘task-free continual learning’ (which is sometimes also referred to as ‘online continual learning’). However, as most of the methods that I compare against do rely on such task boundaries, the code in this repository does use task boundaries (i.e., it uses the task-based setting). If you are interested, here (https://github.com/GMvandeVen/continual-learning#more-flexible-task-free-continual-learning-experiments) you can find some code supporting continual learning experiments with ‘task-free’ data streams. Hope this helps!

PS: with ‘online continual learning’, additional assumptions are sometimes made: that each training sample is presented only once, and that the mini-batch size is one (see also Appendix B of the paper accompanying this code). These assumptions can be introduced in the code of this repository using the options --single-epochs (to present each training sample only once) and --batch=1 (to set the mini-batch size to one). The experiments in the paper on CORe50 follow this stricter setup.
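To make the stricter protocol concrete, here is a minimal, self-contained sketch (not code from this repository; the toy linear model and names are purely illustrative) of a training loop that satisfies both assumptions: every sample is consumed exactly once (a single epoch over the stream) and each parameter update uses a mini-batch of size one.

```python
# Illustrative sketch of the stricter "online" protocol described above:
# single pass over the data, one sample per update. The 1-D linear model
# and function names here are hypothetical, not part of the repository.

import random

def stream(data):
    """Yield each training sample exactly once (single epoch)."""
    for sample in data:
        yield sample

def train_online(data, lr=0.1):
    """Online SGD for a 1-D linear model y = w * x, updating on one sample at a time."""
    w = 0.0
    updates = 0
    for x, y in stream(data):           # mini-batch size = 1
        pred = w * x
        grad = 2 * (pred - y) * x       # gradient of squared error
        w -= lr * grad
        updates += 1                    # one update per sample seen
    return w, updates

random.seed(0)
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(200))]
w, updates = train_online(data)
print(updates)  # 200: each sample was used exactly once
```

In the repository's terms, `--single-epochs` corresponds to the single pass over `stream(data)`, and `--batch=1` corresponds to computing the gradient from one sample per update.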