Open Mukulareddy opened 4 years ago
@Mukulareddy The few-shot class-incremental learning (FSCIL) problem originates from a real-world scenario. When developing AI agents, we usually train a model with good recognition performance on a large-scale database (e.g., the 1000 ImageNet classes). But when the agent is deployed in a real application, it may encounter unseen new classes (e.g., 'chicken curry', which is not among the 1000 base classes), and the developer has to annotate a few training samples of these new classes and update the model as quickly as possible. Thus, the FSCIL problem has the following characteristics:
(1) The base class training set is a large-scale dataset.
(2) The new class training set has very few training samples.
(3) When learning new classes, the base classification head is expanded to support (base + new)-way classification.
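Characteristic (3) can be sketched as follows. This is a hypothetical illustration (not the repo's actual API), using a plain NumPy weight matrix: the base-class rows are kept in place and new rows are appended for the incremental classes.

```python
# Hypothetical sketch: expanding a linear classifier head from C base
# classes to (C + n_new) classes. The function name and initialization
# scheme are illustrative assumptions, not this repository's code.
import numpy as np

def expand_head(W_base, b_base, n_new, rng=None):
    """Return an expanded (C + n_new)-way head; base rows are preserved,
    new rows are randomly initialized with small values."""
    rng = np.random.default_rng(0) if rng is None else rng
    feat_dim = W_base.shape[1]
    W_new = rng.normal(scale=0.01, size=(n_new, feat_dim))
    W = np.vstack([W_base, W_new])                  # (C + n_new, feat_dim)
    b = np.concatenate([b_base, np.zeros(n_new)])   # (C + n_new,)
    return W, b

# Base head for C = 3 classes over 4-dimensional features.
W0 = np.ones((3, 4))
b0 = np.zeros(3)
W1, b1 = expand_head(W0, b0, n_new=2)
print(W1.shape, b1.shape)  # (5, 4) (5,)
```

During incremental training, one would typically freeze or regularize the base rows so the few new samples do not overwrite the base-class knowledge.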
In your scenario, the base model is designed for animal classification. Initially, we prepare a large-scale animal database containing many annotated images (e.g., 1000 images per class) of C categories and use it to train the base model. Then we find that there are 2 new classes (e.g., horse, cow) that were not considered. So we collect a few training samples (e.g., 5 images per new class) and incrementally train the base model on these new samples. The resulting model can classify (C + 2) classes, even though the classes have highly imbalanced numbers of training samples and were not trained all at once.
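The data preparation behind this scenario can be sketched as a session split: the base session keeps all samples of the base classes, while each incremental session keeps only k samples per new class. The class names, k = 5, and the helper below are illustrative assumptions, not this repository's configuration files.

```python
# Hypothetical sketch of the FSCIL session split described above.
# dataset: dict mapping class name -> list of sample identifiers.
import random

def make_sessions(dataset, base_classes, new_classes, k_shot=5, seed=0):
    """Build a large-scale base session and a k-shot incremental session."""
    rng = random.Random(seed)
    # Base session: use every available sample of the base classes.
    base = {c: list(dataset[c]) for c in base_classes}
    # Incremental session: only k_shot samples per new class.
    novel = {c: rng.sample(dataset[c], k_shot) for c in new_classes}
    return base, novel

data = {
    "dog":   [f"dog_{i}.jpg" for i in range(1000)],
    "cat":   [f"cat_{i}.jpg" for i in range(1000)],
    "horse": [f"horse_{i}.jpg" for i in range(50)],
    "cow":   [f"cow_{i}.jpg" for i in range(50)],
}
base, novel = make_sessions(data, ["dog", "cat"], ["horse", "cow"])
print(len(base["dog"]), len(novel["horse"]))  # 1000 5
```

In practice this split is usually expressed as index files or folder lists that the dataloader reads for each session, so the model first trains on the base session and is then updated on the few-shot session.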
The work really looks great. The only problem is applying the same method to our own data for a classification task; I am unable to understand the data preparation part for training. It would be very helpful if you could take a new dataset (let's say 2 classes: dog, cat) and then incrementally train the network on another 2 classes (e.g., horse, cow). The resulting model should then be able to classify all 4 classes. Looking forward to your reply.