-
We encounter forgetting after a couple of epochs.
We noticed that the number of spikes keeps increasing significantly in every layer (recurrent and readout) with every new epoch.
That could have two …
-
Hi:
Thanks for open-sourcing this, but when I fine-tuned on my dataset (whether full-parameter or LoRA), catastrophic forgetting kept coming up (performance decreased on HumanEval). I d…
-
# URL
- https://arxiv.org/abs/1612.00796
# Affiliations
- James Kirkpatrick, N/A
- Razvan Pascanu, N/A
- Neil Rabinowitz, N/A
- Joel Veness, N/A
- Guillaume Desjardins, N/A
- Andrei A. Rus…
-
Thanks for your work; I think it is a milestone in lifelong machine learning.
But when I run your code, the program does not seem to run correctly.
![image](https://user-images.githubu…
ghost updated
9 months ago
-
Catastrophic forgetting relates to having the model "forget" things it hasn't recently seen training data about. E.g., we master one scenario and then focus on another without continuing to train on/'prac…
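The effect described above can be reproduced in a few lines. Below is a minimal sketch (not taken from any of the projects in these threads; all names and the toy tasks are illustrative): a logistic regression is trained on task A, then trained only on a conflicting task B, and its task-A accuracy collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    """Toy binary task: the label is the sign of the first feature."""
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] > 0).astype(float)
    if flip:
        y = 1.0 - y  # task B reverses the rule, so it conflicts with task A
    return x, y

def train(w, x, y, lr=0.5, steps=200):
    """Plain gradient descent on the logistic-regression loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid predictions
        w -= lr * x.T @ (p - y) / len(y)   # gradient step
    return w

def accuracy(w, x, y):
    return float(((x @ w > 0).astype(float) == y).mean())

xa, ya = make_task(500, flip=False)  # task A
xb, yb = make_task(500, flip=True)   # task B (conflicting rule)

w = np.zeros(2)
w = train(w, xa, ya)
acc_a_before = accuracy(w, xa, ya)   # high after training on A

w = train(w, xb, yb)                 # continue training on B only
acc_a_after = accuracy(w, xa, ya)    # collapses: the model "forgot" task A

print(f"task-A accuracy before: {acc_a_before:.2f}, after training on B: {acc_a_after:.2f}")
```

Because the optimizer only ever sees the most recent task's loss, the weights that solved task A are freely overwritten; mitigations such as EWC (arXiv:1612.00796, linked above) or replaying old data work by constraining exactly this overwriting.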
-
Hi:
Thanks for open-sourcing this!
How did you overcome the catastrophic forgetting problem in LoRA fine-tuning?
Performance dropped a lot on the HumanEval dataset after LoRA fine-tuning on my own …
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
https://arxiv.org/abs/1612.00796
TMats updated
6 years ago
-
The [R-Hero experiment](http://arxiv.org/pdf/2012.06824) has uncovered the catastrophic forgetting problem in patch generation.
We will study how to overcome this problem.
(spinoff…
-
Hello,
First of all, thank you for your great work.
I am currently conducting research based on your benchmark, and I have a question about the experimental setup for Table 3 in Section 4.5: "Ed…