locuslab/tofu · Issues
Landing Page for TOFU · MIT License · 83 stars · 18 forks
#43 Which dataset should we use for evaluation? (Yuda-Jin, opened 2 weeks ago, 1 comment)
#42 RuntimeError: 'weight' must be 2-D During Fine-Tuning with Single GPU (ouerwt, opened 2 weeks ago, 0 comments)
#41 Cannot find 'adapter_config.json' in ckpt or on Hugging Face (Yuda-Jin, closed 1 week ago, 0 comments)
#40 Error raised during evaluation (Yuda-Jin, closed 1 month ago, 2 comments)
#39 Could you please provide finetuned model weights for phi-1.5 and llama2? This would unify the basis of our research. (Yuda-Jin, closed 1 month ago, 1 comment)
#38 DeepSpeed Zero-3 is not compatible with `low_cpu_mem_usage=True` or with passing a `device_map` (zhmzm, opened 1 month ago, 0 comments)
#37 Why do the finetuned model and the retain model have similar model utility? (Carol-gutianle, closed 3 months ago, 2 comments)
#36 About DeepSpeed (LetheSec, closed 3 months ago, 0 comments)
#35 Error loading Phi Finetuned model (pomonam, closed 2 months ago, 2 comments)
#34 Added support for LLaMa3 (mikeFore4, closed 4 months ago, 1 comment)
#33 Inquiry about Constructing Datasets with Elaborate Prompt (tbozhong, opened 4 months ago, 0 comments)
#32 Inconsistent number of forget samples when evaluating the retain model (forget10 task) (LetheSec, closed 4 months ago, 2 comments)
#31 Breaking change in Hugging Face Phi-1.5? (ajyl, closed 5 months ago, 1 comment)
#30 Bug in calculating loss when using DPO (zeta-zl, closed 5 months ago, 1 comment)
#29 Trying to get model parallelism and lower precision working (molereddy, closed 5 months ago, 0 comments)
#28 Finetuning with LoRA causes DeepSpeed error (mikeFore4, closed 4 months ago, 1 comment)
#27 Fixing order of directory creation for forget.py to prevent exiting (mikeFore4, closed 4 months ago, 1 comment)
#26 The implementation of Truth Ratio and Probability is different from the definition in the paper (wzunknown, opened 6 months ago, 13 comments)
#25 Support for different num_processes in interleave_eval_result_dict (molereddy, closed 6 months ago, 1 comment)
#24 Question about retain_perturbed.json in the locuslab/TOFU dataset (wtma1999, opened 6 months ago, 1 comment)
#23 TOFU-finetuned Phi-1.5 is not on the Hugging Face page (molereddy, closed 5 months ago, 1 comment)
#22 The Hugging Face leaderboard page is showing a Runtime Error (wzunknown, opened 6 months ago, 0 comments)
#21 Changed bf16 to fp16 and fixed some model paths (akshayneema, closed 6 months ago, 0 comments)
#20 One of the inputs missing for DPO loss (chrisliu298, opened 6 months ago, 4 comments)
#19 Getting truth ratio always 1 (shaswati1, opened 6 months ago, 13 comments)
#18 Dataset contents issues (molereddy, opened 6 months ago, 4 comments)
#17 Eval log file limiting examples (molereddy, closed 4 months ago, 1 comment)
#16 Eval generates the same answer as the dataset (shaswati1, opened 6 months ago, 9 comments)
#15 Refactor eval (zhilif, closed 6 months ago, 0 comments)
#14 Issues introduced by refactoring, and other miscellaneous issues (molereddy, opened 6 months ago, 9 comments)
#13 Refactor eval (pratyushmaini, closed 6 months ago, 0 comments)
#12 End to generated text (molereddy, closed 6 months ago, 5 comments)
#11 Hyperparameter issues in configs (molereddy, closed 6 months ago, 3 comments)
#10 Unable to train finetuned LoRA on forget (shaswati1, opened 6 months ago, 0 comments)
#9 Where is eval_log_aggregated.json generated? (molereddy, closed 7 months ago, 1 comment)
#8 Issues with DeepSpeed (molereddy, closed 7 months ago, 3 comments)
#7 Unable to save finetuned llama2 (shaswati1, closed 7 months ago, 8 comments)
#6 requirements.txt needs a fix (molereddy, closed 7 months ago, 2 comments)
#5 Finetune LLaMA2 with LoRA (petezone, closed 7 months ago, 1 comment)
#4 Is anyone getting a problem with the command for forget.py? (sriramvema, closed 7 months ago, 3 comments)
#3 Where are the evals inside the data folder being generated? (rthapa84, closed 6 months ago, 5 comments)
#2 Command for evaluations (yujianll, closed 6 months ago, 2 comments)
#1 Unable to load the dataset from the Hugging Face Hub; throws a ValueError (archit31uniyal, closed 8 months ago, 1 comment)