facebookresearch / agem

Official implementation of the Averaged Gradient Episodic Memory (A-GEM) in Tensorflow
MIT License

Is it R-walk code? #3

Open dustlrdk opened 5 years ago

dustlrdk commented 5 years ago

I'm trying to reproduce the CIFAR-100 and MNIST results from the RWalk paper, but I can't match them, so I'm opening this issue to compare against your code.

This repo is based on the A-GEM paper (the single-epoch setting suggested in GEM), not on RWalk.

How can I run the code with the RWalk settings (single-/multi-head incremental setting, simple 4-layer network)?

Also, the RWalk paper reports no multi-task (joint-training) reference result, so I can't compute the forgetting (F) and intransigence (I) scores. I have already trained a multi-task model myself; can I use that as the reference, or would it be too unstable?
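(For readers landing here: the core update this repo implements is the A-GEM gradient projection. Below is a minimal NumPy sketch of that rule as described in the A-GEM paper, not a copy of the repo's TensorFlow code; the function name and `eps` guard are mine.)

```python
import numpy as np

def agem_project(g, g_ref, eps=1e-12):
    """A-GEM projection: keep the current-task gradient g if it does not
    conflict with the averaged episodic-memory gradient g_ref; otherwise
    project g onto the half-space where g . g_ref >= 0."""
    dot = np.dot(g, g_ref)
    if dot >= 0.0:
        return g
    # Remove the component of g that points against g_ref.
    return g - (dot / (np.dot(g_ref, g_ref) + eps)) * g_ref
```

After projection, the returned gradient always satisfies `g_tilde . g_ref >= 0`, so a step along it does not (to first order) increase the average memory loss.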

arslan-chaudhry commented 5 years ago

To reproduce the results of the RWalk paper you will have to modify the code yourself. This repository provides a reference implementation of RWalk. To run multi-epoch and single-head experiments, please edit the code accordingly; most of the pieces are already there, you just need to add flags and make the correct function calls. A reference implementation of the smaller CNN is also included.
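(For anyone attempting that modification: the quantity RWalk accumulates per parameter is a path-integral importance score. A schematic NumPy sketch of one optimisation step, following the paper's description rather than this repo's code, is below; the function and argument names are mine.)

```python
import numpy as np

def rwalk_score_step(score, grad, delta_theta, fisher, xi=0.1):
    """Accumulate the RWalk-style importance score for one step.
    Each parameter's contribution is its approximate loss change
    (-grad * delta_theta), normalised by a second-order term built
    from the running Fisher estimate; negative contributions are
    clipped to zero, as in the paper."""
    contrib = np.maximum(-grad * delta_theta, 0.0)
    denom = 0.5 * fisher * delta_theta ** 2 + xi
    return score + contrib / denom
```

The running `fisher` estimate is typically an exponential moving average of squared gradients maintained alongside training.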

dustlrdk commented 5 years ago

Okay. Could you share more detailed settings (e.g. number of epochs, data augmentation)?

Thank you for your answer.

arslan-chaudhry commented 5 years ago

Please refer to the paper.

yunfei-teng commented 4 years ago

Hi, I have a similar question regarding RWalk and PI. I could not find the implementation details (e.g. the number of training epochs) in the RWalk paper. I used the code released by the PI authors and found the performance slightly worse than reported. May I ask for more training details, please?
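(For reference while comparing implementations: the end-of-task consolidation step of PI / Synaptic Intelligence, as described in the SI paper, can be sketched as below. This is a generic NumPy sketch, not code from this repo or the PI authors' release; variable names are mine.)

```python
import numpy as np

def si_consolidate(omega_running, theta_start, theta_end, big_omega, xi=0.1):
    """PI / Synaptic Intelligence consolidation at the end of a task:
    the per-parameter path integral omega_running (accumulated online
    as -grad * delta_theta during training) is normalised by the total
    squared parameter displacement over the task, then added to the
    long-term importance big_omega used in the quadratic penalty."""
    total_delta = theta_end - theta_start
    return big_omega + omega_running / (total_delta ** 2 + xi)
```

Small differences in `xi`, in how `omega_running` is accumulated, and in the number of epochs can noticeably shift results, which may explain gaps from the reported numbers.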