Adamliu1 / SNLP_GCW · 3 stars · 0 forks
Issues
Willmish/schedule multi gpu · #106 · Willmish · opened 5 months ago · 1 comment
[CODE] Code modification for accelerate with distributed training · #105 · Adamliu1 · closed 3 months ago · 0 comments
Updates to task-based evaluation. · #104 · TheRootOf3 · closed 5 months ago · 1 comment
Format Python code with psf/black push · #103 · github-actions[bot] · closed 5 months ago · 0 comments
Format Python code with psf/black push · #102 · github-actions[bot] · closed 5 months ago · 0 comments
Add gradient checkpointing, comment out mink, add autoselecting paddi… · #101 · Willmish · opened 5 months ago · 0 comments
Format Python code with psf/black push · #100 · github-actions[bot] · closed 5 months ago · 0 comments
Format Python code with psf/black push · #99 · github-actions[bot] · closed 5 months ago · 0 comments
[🚧WIP🚧] Experiments Plan · #98 · TheRootOf3 · closed 3 months ago · 7 comments
Implement squad as retaining dataset. · #97 · Davidyz · closed 5 months ago · 0 comments
Question answering format for language unlearning French: "Repondre" vs "Reponse" · #96 · Willmish · closed 3 months ago · 1 comment
Format Python code with psf/black push · #95 · github-actions[bot] · closed 5 months ago · 0 comments
Willmish/fix beavertails eval · #94 · Willmish · closed 5 months ago · 1 comment
Implement dataloader constructor for piaf dataset. · #93 · Davidyz · closed 5 months ago · 0 comments
Implement nq_open as the retaining dataset. · #92 · Davidyz · closed 5 months ago · 1 comment
beavertails multiple run, majority voting for result consistency · #91 · Adamliu1 · closed 2 months ago · 1 comment
Collect a list of all datasets used · #90 · Willmish · closed 3 months ago · 3 comments
Updated notes with minor style fixes while awaiting for the team sync. · #89 · TheRootOf3 · closed 6 months ago · 1 comment
Write a brief cluster notes doc · #88 · TheRootOf3 · opened 6 months ago · 3 comments
Added the first version of cluster notes. · #87 · TheRootOf3 · closed 6 months ago · 0 comments
Make sure the lm-eval-harness contains the right stuff · #86 · Adamliu1 · closed 5 months ago · 1 comment
Make sure the unlearning code is unilateral in all repos and Up to date ReadMe · #85 · Adamliu1 · closed 3 months ago · 1 comment
Go through the EMNLP format. Call for Main Conference Papers - EMNLP 2024 · #84 · Adamliu1 · closed 5 months ago · 2 comments
Migrate issues from docs! · #83 · Willmish · closed 6 months ago · 0 comments
Figure out visualisation for comparing multiple unlearning methods across multiple tasks · #82 · Willmish · opened 6 months ago · 0 comments
Fixed mathqa dataloader for seq and batch unlearning and integrate it… · #81 · Davidyz · closed 6 months ago · 0 comments
Format Python code with psf/black push · #80 · github-actions[bot] · closed 6 months ago · 1 comment
feat: add sail/symbolic-instruction-tuning for symbolic inference unlearning · #79 · Davidyz · closed 5 months ago · 0 comments
STANDARDISE: Make llm unlearning without or with the scheduler a default · #78 · TheRootOf3 · closed 3 months ago · 2 comments
Evaluate base models with beavertails. · #77 · TheRootOf3 · closed 3 months ago · 7 comments
Evaluate base models with lm-evaluation-harness. · #76 · TheRootOf3 · closed 3 months ago · 11 comments
[UNLEARNING+EVALS] Run continuous unlearning for a longer time to see what's gonna happen compared to batch & seq. · #75 · TheRootOf3 · opened 6 months ago · 3 comments
Added cohere harmfuless evaluation script and results. · #74 · TheRootOf3 · closed 7 months ago · 0 comments
Relearn evaluation and relevant results. · #73 · Davidyz · closed 6 months ago · 1 comment
added results for sequential unlearning sample_count 128 and split 4,… · #72 · Adamliu1 · closed 8 months ago · 0 comments
Added eval results for re-runs of batch unlearning with scaled lr. · #71 · TheRootOf3 · closed 8 months ago · 0 comments
Evaluation results for sequential unlearning with 512 samples. · #70 · Davidyz · closed 8 months ago · 1 comment
Change output path for relearn eval to follow existing evaluation results. · #69 · Davidyz · closed 8 months ago · 1 comment
Added seq-1024 split{4,16,64} eval results. · #68 · TheRootOf3 · closed 8 months ago · 3 comments
Added batch-1024 and batch-2048 eval results. · #67 · TheRootOf3 · closed 8 months ago · 1 comment
Adam/batch unlearn results · #66 · Adamliu1 · closed 8 months ago · 0 comments
Format Python code with psf/black push · #65 · github-actions[bot] · closed 8 months ago · 0 comments
Add relearning evaluation and corresponding results for batch · #64 · Davidyz · closed 8 months ago · 1 comment
Willmish/batch unlearn evaluation results for batches 2 and 8 · #63 · Willmish · closed 8 months ago · 1 comment
Avoid accessing evaluation.json before creating. · #62 · Davidyz · closed 8 months ago · 0 comments
lm-eval-harness results for batch unlearning with 512 samples. · #61 · Davidyz · closed 8 months ago · 2 comments
Log epochs (Important), please read · #60 · Willmish · closed 8 months ago · 1 comment
Add epoch logging · #59 · Willmish · closed 8 months ago · 1 comment
Add garbage collecting for CUDA, on batch unlearning. · #58 · Willmish · closed 5 months ago · 3 comments
Added lm-eval-harness results. · #57 · TheRootOf3 · closed 8 months ago · 0 comments