Adamliu1 / SNLP_GCW


[EXPERIMENTS] Sequential unlearning llama 3 8B #125

Open TheRootOf3 opened 3 months ago

TheRootOf3 commented 3 months ago

Given that the continuous approach did not give us promising results (it either left model performance unchanged or unlearned everything), we should check whether the sequential approach is any better.

Open questions: what epoch count and what LR? Suggested: leave the epoch count at 20 and grab one of the “edge” LRs (a sketch of the sequential loop is at the end of this comment).

Hypotheses:

  1. Sequential unlearning unlearns faster, with fewer samples.
  2. Sequential unlearning is better at preserving desired capabilities while degrading the targeted one.
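For concreteness, here is a minimal sketch of what the sequential run could look like, assuming a gradient-ascent unlearning objective over HF-style batches. Everything here is illustrative: `forget_chunks`, `evaluate_fn`, and the constants are placeholders, not this repo's actual training code.

```python
# Minimal sketch only -- the chunking, the objective, and the helper
# names are illustrative assumptions, not this repo's training code.
import torch

EPOCHS_PER_CHUNK = 20  # epoch count suggested in this issue
LR = 2e-6              # placeholder for one of the "edge" LRs


def sequential_unlearn(model, forget_chunks, evaluate_fn):
    """Unlearn one chunk of the forget set at a time.

    Unlike the continuous approach (a single stream over the whole
    forget set), each chunk gets its own full unlearning run, and the
    model is evaluated between chunks so we can track how quickly the
    targeted capability drops relative to general performance.
    """
    optimizer = torch.optim.AdamW(model.parameters(), lr=LR)
    for i, chunk in enumerate(forget_chunks):
        for _ in range(EPOCHS_PER_CHUNK):
            for batch in chunk:  # batch: dict with input_ids / labels
                optimizer.zero_grad()
                loss = model(**batch).loss  # assumes an HF-style model
                # Gradient ascent on forget data: step *up* the LM loss.
                (-loss).backward()
                optimizer.step()
        print(f"after chunk {i}: {evaluate_fn(model)}")
```

Evaluating between chunks is the point of the sequential setup: it should show whether unlearning happens with fewer samples (hypothesis 1) and whether desired capabilities survive longer (hypothesis 2).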