mark1879 / Baichuan-13B-Finetuning
Baichuan-13B instruction fine-tuning
89 stars · 9 forks
Issues
AssertionError: Provided path (/LLaMA-Efficient-Tuning/baichuan-13b-chat-dsl-sft) does not contain a LoRA weight.
#24 · Eternal-Yan opened 5 months ago · 1 comment
RuntimeError: expected scalar type Half but found Float
#23 · qppwdd0324 opened 6 months ago · 0 comments
Thank you for your work, this is a great project! After fine-tuning, how can the test answers be output alongside the evaluation? Currently only the BLEU results are produced. Where in the code should this be changed?
#22 · qppwdd0324 closed 6 months ago · 0 comments
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 212, in _unscale_grads_ raise ValueError("Attempting to unscale FP16 gradients.") ValueError: Attempting to unscale FP16 gradients.
#21
herexk
opened
7 months ago
1
How to train multiple fine-tuning tasks
#20 · tianshuo1 opened 1 year ago · 0 comments
Single-machine multi-GPU training
#19 · tomorrow-zy opened 1 year ago · 0 comments
After 100 epochs of fine-tuning on a given document, the loss clearly trends downward, but the fine-tuned model still outputs unrelated content when asked questions about the document
#17 · reilxlx closed 4 months ago · 0 comments
chat.sh
#16 · sunshineflg opened 1 year ago · 0 comments
Can the Baichuan-13B-Finetuning project not be run on a single machine with multiple GPUs?
#15 · wannianlou opened 1 year ago · 2 comments
Where is the model.chat method used in cli_demo.py defined?
#14 · LittleYouEr opened 1 year ago · 0 comments
After QLoRA fine-tuning, how to deploy with int4 quantization?
#13 · maxisheng opened 1 year ago · 0 comments
Followed the tutorial, but eval fails with ValueError: Expected input batch_size (11) to match target batch_size (251)
#12 · Zhang-star-master opened 1 year ago · 5 comments
Inference fails after training with quantization_bit=8
#11 · q1051738725 opened 1 year ago · 0 comments
What causes the NotImplementedError?
#10 · Zhang-star-master opened 1 year ago · 1 comment
Loss does not decrease
#9 · QJBX-DJN opened 1 year ago · 2 comments
RuntimeError: expected scalar type Half but found Float
#8 · Linjiahua opened 1 year ago · 1 comment
After the update, even 48 GB of GPU memory is exhausted
#7 · away-star opened 1 year ago · 0 comments
How can the Fudan MOSS multi-turn dialogue dataset be fine-tuned with this code? Does the different data format matter?
#6 · sichehu closed 1 year ago · 0 comments
Has anyone encountered overflow problems with full fine-tuning?
#5 · mynewstart opened 1 year ago · 3 comments
Choosing the batch size
#4 · xxm1668 opened 1 year ago · 1 comment
Error when running python finetune_lora.py, please take a look
#3 · tiaotiaosong closed 1 year ago · 6 comments
Training data format
#2 · huangqingyi-code opened 1 year ago · 2 comments
Hello, could you provide a more detailed tutorial for QLoRA fine-tuning?
#1 · zhangyunming opened 1 year ago · 1 comment