Closed xu-ji closed 4 years ago
First of all, thank you for releasing this code.

Do you know of any results for generative replay (i.e. where images from across tasks are generated) on datasets more complicated than MNIST, for example CIFAR or ImageNet?

It seems your feedback connections paper, the three scenarios for continual learning paper, and the original deep generative replay paper only test on MNIST. Did you try any other datasets? Do you think there is something about the combination of more complex natural images and the continual training of the generator that makes it difficult? Surely someone has tried.

I’m very sorry for the late response! Yes, you are right that so far the successful applications of generative replay have only been on datasets with relatively simple inputs (e.g., MNIST). Based on my own attempts (as well as informal discussions with others), scaling up generative replay to problems with more complex inputs (e.g., natural images) turns out not to be straightforward. So you are probably right that something about the combination of more complex inputs and the continual training of the generator makes it difficult. I hope to share my attempts at scaling up generative replay soon.
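For readers unfamiliar with the setup being discussed: in generative replay, a generative model is trained alongside the main model, and when a new task arrives, samples drawn from the previous generator are mixed into the new task's data so that old tasks are not forgotten. The sketch below is a toy illustration of that loop only, not code from this repository; it stands in a `GaussianGenerator` (a diagonal Gaussian fit) for what would in practice be a VAE or GAN, and `train_with_generative_replay` is a hypothetical name.

```python
import numpy as np

rng = np.random.default_rng(0)


class GaussianGenerator:
    """Toy stand-in for a generative model: fits a diagonal Gaussian.

    In real generative replay this would be a VAE or GAN trained on images.
    """

    def fit(self, x):
        self.mu = x.mean(axis=0)
        self.sigma = x.std(axis=0) + 1e-6  # avoid zero variance

    def sample(self, n):
        return rng.normal(self.mu, self.sigma, size=(n, self.mu.size))


def train_with_generative_replay(tasks, replay_ratio=1.0):
    """Train a generator over a sequence of tasks, replaying old samples.

    tasks: list of arrays, one per task, each of shape (n_samples, dim).
    replay_ratio: how many replayed samples to mix in per new sample.
    """
    gen = None
    for x_new in tasks:
        if gen is not None:
            # Replay: sample "pseudo-data" from the previous generator and
            # mix it with the current task's data. This is the step that
            # becomes fragile on complex inputs, since generation errors
            # compound as the generator is retrained on its own samples.
            x_replay = gen.sample(int(len(x_new) * replay_ratio))
            x_train = np.concatenate([x_new, x_replay])
        else:
            x_train = x_new
        # The new generator is trained on the mix, so it (approximately)
        # covers all tasks seen so far.
        new_gen = GaussianGenerator()
        new_gen.fit(x_train)
        gen = new_gen
    return gen
```

With two tasks centred at 0 and 10, the final generator's mean ends up between the two, showing that the replayed samples kept the first task represented; dropping the replay step would leave the generator centred only on the last task.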