git-cloner / llama-lora-fine-tuning
LLaMA fine-tuning with LoRA
https://gitclone.com/aiit/chat/
MIT License · 136 stars · 14 forks
Issues
#16 Replace removed function prepare_model_for_int8_training (yana9i, closed 6 months ago, 0 comments)
#15 About the result of LoRA fine-tuning (lucasliunju, opened 1 year ago, 2 comments)
#14 ModuleNotFoundError: No module named 'utils.prompter' (cyrilakafia, opened 1 year ago, 1 comment)
#13 Run deepspeed fastchat/train/train_lora.py error: padding error (cyrilakafia, opened 1 year ago, 1 comment)
#12 Expected a cuda device, but got: cpu (mvuthegoat, opened 1 year ago, 4 comments)
#11 RuntimeError: CUDA error: out of memory (Hzzhang-nlp, opened 1 year ago, 4 comments)
#10 Issue when fine-tuning with --model_max_length 2048 set (JustinZou1, opened 1 year ago, 1 comment)
#9 Could you please provide the code for merging the generated output file into the original model? (codezealot, opened 1 year ago, 1 comment)
#8 Training stuck (mz2sj, opened 1 year ago, 1 comment)
#7 Downloading the LLaMA model is very slow (mz2sj, closed 1 year ago, 2 comments)
#6 error: subprocess-exited-with-error (Hzzhang-nlp, opened 1 year ago, 5 comments)
#5 How to resume correctly (ycat3, closed 1 year ago, 1 comment)
#4 Run deepspeed fastchat/train/train_lora.py error (codezealot, closed 1 year ago, 5 comments)
#3 Sequence length is longer than the specified maximum sequence length (ycat3, closed 1 year ago, 1 comment)
#2 3.3.3 HTML to Markdown never finishes (ycat3, closed 1 year ago, 2 comments)
#1 Does it support multi-GPUs? (LARRYMIN, closed 1 year ago, 3 comments)