git-cloner / llama2-lora-fine-tuning
llama2 finetuning with deepspeed and lora
https://gitclone.com/aiit/chat/
MIT License · 155 stars · 14 forks
Issues
#15 · Which ZeRO stage (ZeRO-1/2/3) does DeepSpeed use for partitioning here? · rechawine · opened 3 months ago · 0 comments
#14 · ImportError: cannot import name 'import_path' from '_pytest.doctest' · boolmriver · opened 3 months ago · 1 comment
#13 · Why does generate use the base model's tokenizer? · kunzeng-ch · opened 5 months ago · 0 comments
#12 · ValueError: Attention mask should be of size (4, 1, 240, 480), but is torch.Size([4, 1, 240, 240]) · LiBinNLP · opened 6 months ago · 3 comments
#11 · Is there a limit on the decoder's output length? · MarsMeng1994 · opened 8 months ago · 5 comments
#10 · Is stream_output actually unused? · MarsMeng1994 · opened 8 months ago · 1 comment
#9 · MultiGPU+Deepspeed+4bitQlora · yaoching0 · opened 10 months ago · 1 comment
#8 · ===================================BUG REPORT=================================== · backatonesJ · closed 10 months ago · 0 comments
#7 · The step `pip install git+https://github.com/huggingface/transformers -i https://pypi.mirrors.ustc.edu.cn/simple` fails with an error · backatonesJ · closed 10 months ago · 1 comment
#6 · Is multi-node, multi-GPU training supported? How do I configure DeepSpeed stage 2 and stage 3? · yangzhipeng1108 · closed 10 months ago · 0 comments
#5 · Is multi-node, multi-GPU training supported? · yangzhipeng1108 · closed 10 months ago · 2 comments
#4 · validation_files · wchengyu · opened 10 months ago · 1 comment
#3 · GPU requirements for llama2-13B and llama2-70b · batindfa · opened 11 months ago · 1 comment
#2 · Questions about fine-tuning · goog · closed 11 months ago · 5 comments
#1 · What is the purpose of expanding the Chinese vocabulary? · goog · closed 11 months ago · 2 comments