QwenLM/Qwen2.5 · Issues
Qwen2.5 is the large language model series developed by the Qwen team, Alibaba Cloud.
9.38k stars · 580 forks
[Bug]: The underlying reason you get a model that cannot stop generating when you fine-tune Qwen2.5-7B-base with LoRA and a non-<|endoftext|> token as eos_token (#1064) · hxs91 · opened 1 day ago · 0 comments
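For #1064, the setup the title describes looks roughly like the sketch below: LoRA fine-tuning of the base model while replacing <|endoftext|> with another eos token. The model name, target modules, and the choice to also train embed_tokens/lm_head are illustrative assumptions, not the issue's conclusion.

```python
# Sketch of the setup described in #1064: LoRA fine-tuning of the base model
# while using <|im_end|> instead of <|endoftext|> as eos_token. Model name,
# target modules, and modules_to_save are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# Use a non-<|endoftext|> token as the eos token, as in the issue title.
tokenizer.eos_token = "<|im_end|>"
model.generation_config.eos_token_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # If the base model rarely emits the new eos token, also training the
    # embedding and output layers is one commonly discussed mitigation.
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```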
[Bug]: Running the official demo examples for Qwen2.5-72B-Instruct-AWQ and Qwen2.5-32B-Instruct-AWQ fails with: returned non-zero exit status 1 (#1059) · SuSuStarSmile · opened 2 days ago · 0 comments
A simplified version of the inference code? (#1057) · weizhenhuan · opened 3 days ago · 0 comments
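On #1057, a minimal transformers-based inference sketch is shown below; the model name and generation settings are assumptions for illustration.

```python
# Minimal transformers inference sketch for a Qwen2.5 instruct model.
# The model name and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated part.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```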
[Bug]: Use of the term "open source" to describe Qwen when the training data is not open (#1055) · phly95 · opened 3 days ago · 3 comments
[Bug]: Model name error in vLLM deployment (#1052) · JulioZhao97 · closed 5 days ago · 4 comments
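On #1052, model-name mismatches against a vLLM OpenAI-compatible server usually look like the sketch below; the endpoint URL and model name are assumptions.

```python
# Sketch of a chat request against a vLLM OpenAI-compatible server. The `model`
# field must match the name the server exposes (the HF repo id by default, or
# the value passed to --served-model-name); endpoint and name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # must match the served model name exactly
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```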
[Bug]: AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet. (#1049) · yananchen1989 · opened 1 week ago · 1 comment
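On #1049, the error message appears to come from vLLM's BitsAndBytes loading path; loading 4-bit weights through transformers' BitsAndBytesConfig is a separate path, sketched here with illustrative settings.

```python
# Sketch of 4-bit loading through transformers + bitsandbytes, which is a
# different code path from the vLLM loader that raises the error in #1049.
# Model name and quantization settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```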
[Bug]: Cannot deploy Qwen2.5 with vLLM (#1048) · joyyyhuang · closed 1 week ago · 1 comment
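For deployment problems like #1048, running vLLM's offline Python API is a quick sanity check, sketched below with an assumed model name and sampling settings.

```python
# Offline sanity check with vLLM's Python API, independent of the OpenAI server.
# Model name and sampling parameters are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
sampling = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=128)
outputs = llm.generate(["Briefly describe what vLLM does."], sampling)
print(outputs[0].outputs[0].text)
```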
Rebuild docs for speed benchmark (#1045) · wangxingjun778 · closed 3 days ago · 0 comments
Add Qwen2.5 perf report (#1044) · wangxingjun778 · closed 1 week ago · 0 comments
Add vLLM version warning (#1043) · jklj077 · closed 1 week ago · 0 comments
[Badcase]: After converting Qwen2.5-3B-Instruct to MLX format with the mlx framework and then quantizing that MLX model to int4 with mlx_lm.convert, inference fails with: ValueError: [dequantize] The matrix should be given as a uint32 (#1041) · ghoshadow · opened 1 week ago · 0 comments
[REQUEST]: (#1040) · DAAworld · closed 4 days ago · 1 comment
[Bug]: Model responses very often contain the replacement character � (#1039) · GitHub-lql · opened 1 week ago · 2 comments
[Bug]: Launching qwen2.5-32b-instruct with Xinference (vLLM backend), the inference output is all exclamation marks (#1038) · andylzming · opened 1 week ago · 3 comments
[Badcase]: Qwen2.5-7B-Instruct performs poorly at few-shot prompting, specifically on the SMDoc dataset of MedBench (#1032) · tzyodear · opened 2 weeks ago · 0 comments
[Bug]: With a model served by vLLM, responses stop addressing the question once the context exceeds a certain length (#1031) · Ave-Maria · closed 2 weeks ago · 3 comments
[Bug]: After quantizing a LoRA-merged model, the quantized model keeps emitting "human:" in its output (#1029) · shenshaowei · opened 2 weeks ago · 3 comments
Add modelers to README.md (#1028) · vvmumu · opened 2 weeks ago · 0 comments
[Badcase]: (#1027) · zhuzcalex · opened 2 weeks ago · 0 comments
[Badcase]: openai.BadRequestError: Error code: 400 - {'error': {'message': 'unexpected EOF', 'type': 'invalid_request_error', 'param': None, 'code': None}} (#1026) · XyLove0223 · opened 2 weeks ago · 5 comments
[Bug]: Qwen2.5-14B-Instruct-GPTQ-Int4 shows severe repetition and hallucination (#1024) · yang-collect · opened 3 weeks ago · 7 comments
[Bug]: Deploying qwen2.5-32b-instruct-gptq-int4 with lmdeploy on a 4x 16GB V100 machine peaks at only 80 tokens/s output; is this speed normal? (#1023) · SolomonLeon · opened 3 weeks ago · 3 comments
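For a setup like #1023, a 4-way tensor-parallel lmdeploy pipeline looks roughly like the sketch below; the model path and config values are assumptions, and nothing here speaks to what throughput should be expected on V100s.

```python
# Sketch of a 4-way tensor-parallel lmdeploy pipeline for a GPTQ-Int4 checkpoint,
# similar to the setup in #1023. Model path and tp value are assumptions; this
# says nothing about the expected tokens/s.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4",
    backend_config=TurbomindEngineConfig(tp=4),
)
responses = pipe(["Briefly introduce yourself."])
print(responses[0].text)
```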
After LoRA fine-tuning Qwen2.5-1.5B with LLaMA-Factory, vLLM fails to load the model with an error; looking for advice (#1022) · Jimmy-L99 · closed 3 weeks ago · 0 comments
[Badcase]: qwen2.5 generates a literal \\n with some probability (#1021) · 520jefferson · closed 3 weeks ago · 1 comment
[Bug]: Self-tested math scores for qwen2.5-72b-instruct differ noticeably from the leaderboard scores (#1020) · tianshiyisi · closed 2 weeks ago · 7 comments
[Bug]: vLLM produces different results from PeftModelForCausalLM (#1018) · chansonzhang · opened 3 weeks ago · 3 comments
[Badcase]: With the same fine-tuning data, Qwen1.5 14B is about 20% more accurate than Qwen2.5 14B; what could be the reason? (#1016) · Jayc-Z · opened 3 weeks ago · 1 comment
[Bug]: Served with vLLM, function calling through OpenAI's swarm does not work properly (#1015) · 18600709862 · opened 3 weeks ago · 2 comments
[Bug]: Parameter precision and GPU memory usage at load time are not as expected; the quantized model Qwen2.5-7B-Instruct-GPTQ-Int4 cannot be loaded in int4 precision (#1014) · yanli789 · opened 3 weeks ago · 3 comments
[Bug]: No heartbeat received from MQLLMEngine (#1013) · hulk-zhk · opened 3 weeks ago · 0 comments
[Bug]: Qwen 2.5 tool calls change function names after response (#1012) · LuckLittleBoy · opened 4 weeks ago · 3 comments
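For tool-call issues like #1012, the round trip through an OpenAI-compatible endpoint looks roughly like this sketch, showing where the returned function name comes back; the endpoint, model name, and example tool are assumptions.

```python
# Sketch of a tool call through an OpenAI-compatible endpoint, showing where the
# function name in `tool_calls` comes back (the behaviour questioned in #1012).
# Endpoint, model name, and the example tool are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)
for call in response.choices[0].message.tool_calls or []:
    # The returned name should match the declared tool name exactly.
    print(call.function.name, call.function.arguments)
```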
[REQUEST]: Could the Qwen performance report also include time to first token? (#1011) · zhufeizzz · opened 4 weeks ago · 1 comment
[Bug]: With Qwen2.5-72B-Instruct deployed via vLLM, all Chinese characters in function-call output are escaped (#1009) · ericg108 · opened 4 weeks ago · 1 comment
[Bug]: Errors during load testing of Qwen2.5-72B-Instruct deployed with vLLM (#1008) · WangJianQ-0118 · opened 1 month ago · 1 comment
[REQUEST]: Add finetuning scripts (#1007) · chansonzhang · opened 1 month ago · 1 comment
[Bug]: Inference of the Qwen2.5 72B GPTQ-Int8 model on Nvidia L20 does not meet expectations (#1006) · renne444 · opened 1 month ago · 8 comments
[Badcase]: Using Qwen2-7B to translate Chinese produces extra output (#1005) · cjjjy · opened 1 month ago · 2 comments
[Bug]: After deploying with vLLM, calling it with the official example fails with openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "name 'Extension' is not defined", 'type': 'BadRequestError', 'param': None, 'code': 400} (#998) · 1gst · opened 1 month ago · 3 comments
[Bug]: Document Q&A ignores part of the data, e.g. when the certificate number is 12345 the answer gives 2345 (#997) · daimashenjing · opened 1 month ago · 4 comments
docs: Add OpenLLM (#996) · Sherlock113 · closed 1 week ago · 3 comments
Question about the format of the function parameters (#994) · XuyangHao123 · closed 1 month ago · 0 comments
[Badcase]: qwen2.5-72b inference results on Ascend 910 are not as expected (#992) · tianshiyisi · opened 1 month ago · 6 comments
[Badcase]: Function calling produces abnormal tokens (iNdEx) (#991) · abiaoa1314 · opened 1 month ago · 1 comment
Correct mlx-lm documentation (#990) · gringocl · closed 3 weeks ago · 0 comments
[Badcase]: Qwen2.5-72B-Instruct-GPTQ-Int4 input_size_per_partition (#986) · hyliush · opened 1 month ago · 6 comments
[Badcase]: Qwen2.5 14B Instruct can't stop generation (#985) · Jeremy-Hibiki · opened 1 month ago · 1 comment
[Question]: File upload after deploying Qwen2 locally (#983) · Patrick24080735 · closed 1 month ago · 1 comment
Add fine-tuning README for LLaMA-Factory (#982) · yangjianxin1 · closed 3 weeks ago · 1 comment
Update doc (#981) · jklj077 · closed 1 month ago · 0 comments
[Question]: From which version onward does vLLM support inference and serving for the Qwen2.5-14B-Instruct model? (#960) · zengqingfu1442 · closed 1 month ago · 0 comments
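For #960, a local check of the installed vLLM build is sketched below; it only inspects the current install (Qwen2.5 checkpoints declare the Qwen2ForCausalLM architecture) and does not assert a minimum supported version.

```python
# Local check related to #960: print the installed vLLM version and whether the
# Qwen2 architecture (declared by Qwen2.5 checkpoints) is registered. This only
# inspects the local install; it does not assert a minimum supported version.
import vllm
from vllm import ModelRegistry

print("vLLM version:", vllm.__version__)
print("Qwen2ForCausalLM registered:", "Qwen2ForCausalLM" in ModelRegistry.get_supported_archs())
```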