-
https://github.com/aimagelab/mammoth
-
> The perplexity is related to codebook utilization: the higher the perplexity, the better the network utilizes the codebook. Therefore, increasing perplexity during training is reasona…
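As a minimal sketch of the standard VQ-VAE perplexity computation this quote refers to (the function name and shapes here are assumptions for illustration): perplexity is the exponentiated entropy of the average codebook-assignment distribution, so it ranges from 1 (a single code ever used) up to the codebook size (all codes used uniformly).

```python
import torch

def codebook_perplexity(encodings: torch.Tensor) -> torch.Tensor:
    """Perplexity of codebook usage.

    `encodings` is a one-hot matrix of shape (N, K): N encoder outputs
    assigned to K codebook entries. Returns exp(entropy of the average
    assignment distribution); higher means better codebook utilization.
    """
    avg_probs = encodings.float().mean(dim=0)                 # (K,) usage frequencies
    entropy = -(avg_probs * (avg_probs + 1e-10).log()).sum()  # epsilon avoids log(0)
    return entropy.exp()

# Example: 1000 vectors quantized against a 64-entry codebook
assignments = torch.randint(0, 64, (1000,))
one_hot = torch.nn.functional.one_hot(assignments, num_classes=64)
print(codebook_perplexity(one_hot))  # approaches 64 for near-uniform usage
```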
-
## 🚀 Feature
A class-based dataset sampler for class-incremental and continual learning research.
## Motivation
For research in the class-incremental learning domain, we need datasets to be spl…
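A minimal sketch of what such a splitter could look like, not an existing API: it groups sample indices by label and returns one `Subset` per class group, assuming a map-style dataset where `dataset[i]` yields `(x, y)`.

```python
from collections import defaultdict
from torch.utils.data import Dataset, Subset

def split_by_classes(dataset: Dataset, tasks: list) -> list:
    """Split a labeled dataset into class-incremental tasks.

    `tasks` is a list of class-label groups, e.g. [[0, 1], [2, 3]];
    each returned Subset contains only samples whose label falls in
    the corresponding group.
    """
    by_class = defaultdict(list)
    for idx in range(len(dataset)):
        _, label = dataset[idx]
        by_class[int(label)].append(idx)
    return [Subset(dataset, [i for c in group for i in by_class[c]])
            for group in tasks]

# Usage sketch: two-classes-per-task splits of a 10-class dataset
# task_datasets = split_by_classes(cifar10_train, [[0, 1], [2, 3], [4, 5]])
```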
-
https://arxiv.org/abs/2112.02706
-
Investigating Continual Pretraining in Large Language Models: Insights and Implications
Examining Forgetting in Continual Pre-training of Aligned Large Language Models
Towards Incremental Learni…
-
I would like to add some linear probing tools, since linear probing is a widely used technique for evaluating both continual and non-continual learning methods. I don't know what the best way to do …
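For context, a minimal sketch of the linear probing protocol itself (names and hyperparameters are illustrative assumptions, and the backbone is assumed to return a flat feature vector): freeze the backbone and train only a linear head, so accuracy measures representation quality rather than further fine-tuning.

```python
import torch
from torch import nn

def linear_probe(backbone: nn.Module, feat_dim: int, num_classes: int,
                 loader, epochs: int = 10, lr: float = 1e-3, device: str = "cpu"):
    """Train a linear classifier on frozen features (linear probing)."""
    backbone.eval().to(device)
    for p in backbone.parameters():
        p.requires_grad_(False)  # backbone stays fixed throughout

    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)  # frozen features, no gradients
            loss = loss_fn(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```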
-
Hi @danielhanchen
I am trying to fine-tune gemma2-2b for my task, following the continued fine-tuning guidelines in Unsloth. However, I am facing OOM while doing so. My intent is to train gemm…
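Not a verified fix for this exact setup, but a hedged sketch of the usual memory-saving knobs with Unsloth's documented API; the checkpoint name and every value below are assumptions to tune.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b",  # assumed checkpoint name
    max_seq_length=1024,              # shorter sequences cut activation memory
    load_in_4bit=True,                # 4-bit base weights shrink the footprint
    dtype=None,                       # let Unsloth pick bf16/fp16
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # trades compute for VRAM
)
```

Pairing this with a small per-device batch size and gradient accumulation in the trainer arguments is the other common lever against OOM.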
-
Dear author,
I am very interested in your research on AGCN++ and am considering applying it in the field of medicine. Could you please share the code with me? I would greatly appreciate your assis…
-
### Title of the talk
Rolling with Python: A Deep Dive into Wheels
### Description
In this talk, we will take a deep dive into wheels (see the sketch after this list):
- What they are
- Why they are needed
- Why each platform (m…
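As a small illustration of the per-platform point, this sketch uses the `packaging` library (a tooling assumption, not part of the original proposal) to list the wheel tags the running interpreter accepts; a wheel's filename must match one of these, which is why compiled projects ship a separate wheel per platform.

```python
from packaging.tags import sys_tags

# Most specific tags first, e.g. cp312-cp312-manylinux_2_17_x86_64
for tag in list(sys_tags())[:5]:
    print(f"{tag.interpreter}-{tag.abi}-{tag.platform}")
```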
-
**Describe the bug**
I have many entries ignored by citr, all failing with the error "The name list field author cannot be parsed".
**To Reproduce**
For example, it happens for this entry:
@article{Ven…
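The entry above is truncated, so the actual cause can't be confirmed here, but this parser error commonly means the `author` field is not an `and`-separated name list or contains an unprotected corporate name. A hypothetical entry with a parseable name list (all names invented for illustration):

```bibtex
@article{hypotheticalKey2020,
  author  = {Last, First M. and {Some Research Consortium}},
  title   = {A hypothetical entry whose name list parses cleanly},
  journal = {Example Journal},
  year    = {2020}
}
```

The extra braces around the corporate name keep the parser from trying to split it into given/family parts.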