InternLM / xtuner
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0
4k stars · 314 forks
Issues
If all other dependencies are already installed, how can I install xtuner directly?
#966 · hm123450 · opened 2 days ago · 0 comments
Why does the condition for enabling SP in qwen2_attn_forward also check whether the model is training? Is it not supported during eval?
#965 · eedalong · opened 5 days ago · 0 comments
How to configure VS Code to debug xtuner
#964 · PanXiongAdam · opened 5 days ago · 1 comment
Abnormally high GPU memory usage when training llava_llama3_8b_full_CLIP_lora
#963 · Yanllan · opened 1 week ago · 0 comments
ModuleNotFoundError: No module named 'accelerate'
#962 · barisuzar · opened 1 week ago · 0 comments
Is qwen2.5 supported?
#961 · charliedream1 · opened 2 weeks ago · 2 comments
RuntimeError: "_amp_foreach_non_finite_check_and_unscale_cuda" not implemented for 'BFloat16' (has no one solved this?)
#960 · yang-zhuang · opened 2 weeks ago · 1 comment
qwen support transformers>=4.46
#959 · HIT-cwh · opened 3 weeks ago · 0 comments
Problem converting a .pth checkpoint to an HF model
#958 · no-execution · opened 3 weeks ago · 0 comments
Command error: "Adafactor is already registered in optimizer at torch.optim"
#957 · monteir03 · opened 4 weeks ago · 4 comments
torch compilation error
#956 · tcxia · opened 1 month ago · 0 comments
Can the xtuner framework be deployed offline on a server for fine-tuning?
#955 · Fanxhion · opened 1 month ago · 0 comments
Added support for MiniCPM3
#954 · LDLINGLINGLING · closed 1 month ago · 0 comments
How to set the random seed to control randomness in DPO training
#953 · pingbowen23 · opened 1 month ago · 2 comments
Error when fine-tuning InternLM2.5 with xtuner
#952 · sakura073 · opened 1 month ago · 9 comments
fix qwen2 tokenizer name
#951 · amulil · opened 1 month ago · 0 comments
Error: [WinError 2] The system cannot find the file specified.
#950 · renlingjie · opened 1 month ago · 0 comments
How to inspect the training data during fine-tuning
#949 · liguoyu666 · opened 1 month ago · 0 comments
CheckpointHook does not save the model when by_epoch is enabled
#948 · aizhweiwei · opened 1 month ago · 1 comment
How to plot the loss separately for data from different sources during SFT
#947 · Abigail61 · opened 1 month ago · 0 comments
Add functionality to download models from sources other than HuggingFace
#946 · starmountain1997 · closed 2 weeks ago · 0 comments
How to resume pre-training from checkpoints with xtuner
#945 · aizhweiwei · opened 1 month ago · 1 comment
xtuner runs fine on a dataset of more than 10,000 samples but fails on a 1,000-sample dataset
#944 · tiang2002 · opened 1 month ago · 0 comments
Does xtuner support DPO for InternVL?
#943 · fabriceyhc · opened 1 month ago · 1 comment
Loss becomes NaN while fine-tuning llava-llama3-8b
#942 · liboaccn · opened 1 month ago · 1 comment
Error when fine-tuning a model based on InternLM2-7B: TypeError: Linear4bit.forward() takes 2 positional arguments but 3 were given
#941 · AFObject · opened 1 month ago · 0 comments
Is fine-tuning Flux.1 dev supported?
#940 · TongrongHuang · opened 1 month ago · 0 comments
Support for training Llama 3.2 Vision
#939 · JAVerma · opened 1 month ago · 0 comments
When seq_parallel_world_size is set to a value greater than 1, should use_varlen_attn not be set to true?
#938 · Fovercon · opened 2 months ago · 1 comment
Error when fine-tuning with xtuner in Docker; not sure what the problem is
#937 · 159357hou · opened 2 months ago · 2 comments
Is qwen2 currently supported?
#936 · Zheng-Jay · opened 2 months ago · 3 comments
AttributeError: 'Qwen2FlashAttention2' object has no attribute '_flash_attention_forward'
#935 · zhangyuqi-1 · opened 2 months ago · 2 comments
Training hangs when using four GPUs
#934 · AlittlePIE · opened 2 months ago · 1 comment
Vocabulary size mismatch after fine-tuning intern2.5-20B
#933 · topology1 · opened 2 months ago · 0 comments
Training hangs after replacing the original sampler with LengthGroupedSampler
#932 · xcy9614 · opened 2 months ago · 0 comments
[Fix] Fix OOM during QLoRA conversion
#931 · fanqiNO1 · opened 2 months ago · 0 comments
[Bugs] fix qlora convert bugs
#930 · HIT-cwh · closed 1 month ago · 0 comments
How to run validation and testing?
#929 · Diyigelieren · opened 2 months ago · 0 comments
version `GLIBCXX_3.4.29' not found
#928 · amannier · opened 2 months ago · 0 comments
Failed to run inference on a single image using xtuner chat with the llava-llama3-8b model
#927 · J0eky · closed 2 months ago · 1 comment
Question about the reward model
#926 · Eren139 · opened 2 months ago · 1 comment
Error when training qwen2 with transformers == 4.44.2 and xtuner == 0.1.23
#925 · thomZ1 · opened 2 months ago · 2 comments
Multi-node multi-GPU training error: ss1.ss_family == ss2.ss_family. 2 vs 10
#924 · sph116 · opened 2 months ago · 0 comments
What were the exact experimental conditions for the training TGS comparison with LlamaFactory?
#923 · shihanmax · opened 2 months ago · 0 comments
Error: Cannot find reference 'VarlenAttnArgsToMessageHubHook' in 'init.py'
#922 · hutiehua-1 · opened 2 months ago · 1 comment
A question: the final loss is not computed against reward_token_id during training, so why can inference use reward_token_id as the reference?
#921 · woshixiaobai2019 · opened 2 months ago · 6 comments
QwenVL support
#920 · liyan1997 · opened 2 months ago · 0 comments
Integrate Liger Kernel: the most efficient Triton training kernels
#919 · ByronHsu · opened 2 months ago · 0 comments
Some questions about step counting
#918 · young-chao · opened 2 months ago · 0 comments
add rescale sp loss
#917 · HIT-cwh · opened 2 months ago · 0 comments