Qihoo360 / 360zhinao · Apache License 2.0 · 263 stars · 22 forks
Issues · sorted by: Newest
#19 · Following this invocation method to run the reranking model raises an error: no such model id · standing88 · opened 1 day ago · 0 comments
#18 · Error when running the rerank model demo... · hegang1-tal · closed 3 weeks ago · 0 comments
#17 · How can qihoo360/360Zhinao-1.8B-Reranking be used in a RAG application? Could you give an example? · shizidushu · opened 1 month ago · 3 comments
#16 · Corrected bibtex in README.md · XinrunDu · closed 1 month ago · 0 comments
#15 · vllm bug · zky001 · closed 1 month ago · 0 comments
#14 · Token indices sequence length is longer than the specified maximum sequence length for this model (8473 > 4096). Running this sequence through the model will result in indexing errors · 81286300 · opened 2 months ago · 0 comments
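The warning in #14 means the tokenized prompt (8473 tokens) exceeds the model's 4096-token context window, so the input must be truncated before inference. A minimal head-and-tail truncation sketch in plain Python (the 4096 limit comes from the issue text; `truncate_ids` is an illustrative helper, not part of the 360zhinao codebase — with a Hugging Face tokenizer one would typically pass `truncation=True, max_length=...` instead):

```python
def truncate_ids(token_ids, max_len=4096, keep_tail=512):
    """Clamp a token-id sequence to max_len entries.

    Keeps the first (max_len - keep_tail) tokens plus the last keep_tail
    tokens, so both the start of the prompt and the most recent context
    survive. Purely illustrative; real pipelines truncate in the tokenizer.
    """
    if len(token_ids) <= max_len:
        return token_ids
    head = max_len - keep_tail
    return token_ids[:head] + token_ids[-keep_tail:]

ids = list(range(8473))       # stand-in for the oversized sequence from the issue
clipped = truncate_ids(ids)
print(len(clipped))           # 4096
```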
#13 · { "msg": "name 'message' is not defined", "status_code": 500 } · linjianwei888 · opened 2 months ago · 0 comments
#12 · How to access the multimodal large model · 77h2l · opened 2 months ago · 0 comments
#11 · Running fine-tuning fails with "torch.cuda.OutOfMemoryError: CUDA out of memory." · alfiy · opened 2 months ago · 1 comment
#10 · Running the fine-tuning code raises the error '#error "PYTHON < 3.6 IS UNSUPPORTED. pybind11 v2.9 was the last to support Python 2 and 3.5.' · alfiy · opened 2 months ago · 1 comment
#9 · Failed building wheel for flash-attn · jsoncode · closed 2 months ago · 1 comment
#8 · Running 360Zhinao-7B-Chat-32K: 'NoneType' object is not callable · choshiho · opened 3 months ago · 10 comments
#7 · Cannot run inference with the two quantized models 360Zhinao-7B-Chat-360K-Int4 and 360Zhinao-7B-Chat-32K-Int4 · qingfeng2018 · closed 2 months ago · 2 comments
#6 · BFloat16 is not supported on MPS · coldwater2000 · closed 3 months ago · 0 comments
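Issue #6 reflects that PyTorch's MPS backend (Apple Silicon) has historically lacked bfloat16 support, so loading a bf16 checkpoint there fails. A minimal dtype-fallback sketch in plain Python (device names and dtype names are passed as strings purely for illustration; `pick_dtype` is a hypothetical helper — real code would return `torch.dtype` objects, e.g. `torch_dtype=torch.float16` when loading on `mps`):

```python
def pick_dtype(device: str) -> str:
    """Choose a dtype name for the given device string.

    bfloat16 works on recent CUDA GPUs, but the MPS backend does not
    support it, so fall back to float16 there; CPU stays in float32.
    Illustrative only.
    """
    if device.startswith("cuda"):
        return "bfloat16"
    if device.startswith("mps"):
        return "float16"   # "BFloat16 is not supported on MPS"
    return "float32"

print(pick_dtype("mps"))   # float16
```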
#5 · An error is raised when asking a question · flyfox666 · closed 3 months ago · 1 comment
#4 · Question about model sizes and required GPU memory · p7759809 · closed 3 months ago · 0 comments
#3 · Not compatible with vllm 4.0 · zollty · closed 2 months ago · 2 comments
#2 · Will model conversion to gguf via llama.cpp be supported? · flyfox666 · closed 2 months ago · 1 comment
#1 · Will magnet links be provided? · hggq · closed 3 months ago · 2 comments