zhangfaen / finetune-Qwen2-VL
MIT License · 208 stars · 20 forks
Issues
#20 ImportError: FlashAttention2 has been toggled · nobug-code · opened 3 days ago · 0 comments
#19 Finetune Qwen2-VL-7B-Instruct · kiennt120 · opened 1 week ago · 3 comments
#18 Support Pre-training · WangRongsheng · closed 2 weeks ago · 0 comments
#17 Loading model after fine-tuning · zarif98sjs · closed 2 weeks ago · 0 comments
#16 No module named 'flash_attn_2_cuda' · zarif98sjs · closed 2 weeks ago · 3 comments
#15 Have you encountered OOM when fine-tuning the 7B model? · junwenxiong · opened 3 weeks ago · 5 comments
#14 python test_on_trained_model_by_us.py error · CarlHuangNuc · opened 1 month ago · 0 comments
#13 Does this code support Stage 2 (Multi-task Pretraining)? · CarlHuangNuc · opened 1 month ago · 0 comments
#12 Qwen2-VL support detection · WangRongsheng · closed 1 month ago · 4 comments
#11 Training Data Approach · kiranmaya · closed 1 month ago · 3 comments
#10 Will pretraining be supported? · Wangman1 · opened 1 month ago · 2 comments
#9 Why is there no padding or truncation based on max length? · PrAsAnNaRePo · opened 1 month ago · 1 comment
#8 Error when training VL-7B on multiple GPUs; VRAM is sufficient. Does this training approach put one model on each GPU? · weilanzhikong · opened 1 month ago · 1 comment
#7 How can I fine-tune with LoRA? · Nieleilei · opened 1 month ago · 0 comments
#6 Working with HuggingFace · wjbmattingly · opened 2 months ago · 5 comments
#5 How to generate descriptions of specific content in an image? · Guangming92 · opened 2 months ago · 1 comment
#4 Can it be trained on video? · SixGoodX · closed 2 months ago · 1 comment
#3 apply_chat_template raises ValueError: No chat template is set for this processor. Please either set the `chat_template` attribute, or provide a chat template as an argument. · lonngxiang · closed 2 months ago · 1 comment
#2 Errors caused by running flash_attn · lonngxiang · closed 2 months ago · 12 comments
#1 Does this code's fine-tuning not count as LoRA fine-tuning? Which layers exactly does it fine-tune? · lonngxiang · closed 2 months ago · 8 comments