baichuan-inc / Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
https://huggingface.co/baichuan-inc/Baichuan-13B-Chat
Apache License 2.0 · 2.98k stars · 236 forks
Issues
#206 · NPU deployment · httang1224 · opened 6 months ago · 0 comments
#205 · How can model inference be sped up? · hunfwj · opened 7 months ago · 0 comments
#204 · Resolved · ke-01 · closed 7 months ago · 0 comments
#203 · What algorithm does the quantized version of Baichuan 2 use? · huangxiancun · opened 8 months ago · 0 comments
#202 · Baichuan-13B performs poorly with vLLM · moseshu · opened 11 months ago · 1 comment
#201 · feat: function calling · wey-gu · opened 11 months ago · 0 comments
#200 · Error when loading a trained model into web.demo · ghh1125 · opened 11 months ago · 0 comments
#199 · How much GPU memory is needed to fine-tune with the official fine-tune script? An A6000 (48 GB) reports out-of-memory · nevesaynever1 · closed 12 months ago · 0 comments
#198 · After fine-tuning baichuan2-13b I get a .pth file; how do I run inference with it? · dongdongqiang2018 · opened 1 year ago · 0 comments
#197 · Can Baichuan-13B-Base be deployed on a V100? · JasonFlyBeauty · opened 1 year ago · 0 comments
#196 · Local deployment version issue · JasonFlyBeauty · closed 1 year ago · 2 comments
#195 · Batch generation example for baichuan-13b-chat · MrInouye · opened 1 year ago · 0 comments
#194 · Problem reproducing the Baichuan 2 MMLU results · zhanghan1992 · opened 1 year ago · 0 comments
#193 · Does everyone rent GPU servers to run large models? · jiuwenyu · closed 1 year ago · 0 comments
#192 · Does this model not support multi-GPU mode? · 394988736 · closed 1 year ago · 0 comments
#191 · ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format. · klj123wan · opened 1 year ago · 0 comments
#190 · ValueError: Tokenizer class BaichuanTokenizer does not exist or is not currently imported. (see the sketch after this list) · lonngxiang · opened 1 year ago · 2 comments
#189 · How to deploy offline? · wangnaihao · opened 1 year ago · 1 comment
#188 · SFT loss does not decrease when fine-tuning baichuan-13b-chat · xiaohuihwh · opened 1 year ago · 1 comment
#187 · Without any fine-tuning, baichuan-13b already rambles to itself in plain conversation and keeps leaking Human:/Assistant: dialogue data · hahajinghuayuan · opened 1 year ago · 1 comment
#186 · Can anyone share streaming code for calling api.py? · xuyaokun · opened 1 year ago · 2 comments
#185 · Why is the ALiBi encoding inconsistent with the standard ALiBi encoding? · wx971025 · opened 1 year ago · 0 comments
#184 · Adding a length limit in the prompt has no effect · Jessie37464 · opened 1 year ago · 0 comments
#183 · fasttransformer inference · HalcyonLiang · opened 1 year ago · 0 comments
#182 · Generation sometimes gets stuck producing output and model.chat waits a long time for a response; is there a way to cut off such overly long responses quickly? · janglichao · opened 1 year ago · 1 comment
#181 · How to fix the random seed used for sampling? · wangzhijian-tal · opened 1 year ago · 0 comments
#180 · How to run model inference on a single GPU? · tanglu86 · closed 1 year ago · 1 comment
#179 · Add OpenCompass badge in README · vansin · opened 1 year ago · 0 comments
#178 · Web demo output becomes very slow under concurrent multi-user load · jamesruio · opened 1 year ago · 1 comment
#177 · [Evaluation] Provide evaluation results for Baichuan models on OpenCompass · Leymore · opened 1 year ago · 0 comments
#176 · Abnormal Q&A after switching cli_demo.py to Baichuan-13B-Base · cgq0816 · opened 1 year ago · 2 comments
#175 · Reward model loss does not decrease when training with baichuan-13b · zhangzuizui · opened 1 year ago · 0 comments
#174 · Deploy Failed · hedongyan · closed 1 year ago · 1 comment
#173 · Where is the automatically downloaded model saved? · lonely1215225 · opened 1 year ago · 0 comments
#172 · The ALiBi mask is inconsistent with the paper · ReactiveCJ · opened 1 year ago · 4 comments
#171 · Exception when running the official web_demo on a machine with two RTX 3060 GPUs · youyajike · opened 1 year ago · 1 comment
#170 · How to do pre-training or continued (incremental) pre-training with Baichuan? · ArtificialZeng · opened 1 year ago · 0 comments
#169 · Inference results differ significantly between BF16 and FP32; is this expected? · NicholasYoungAI · opened 1 year ago · 0 comments
#168 · Does Baichuan-13B-Chat have memory of previously asked questions? · drpanhuaming · opened 1 year ago · 1 comment
#167 · Every question makes the page show the two superfluous characters "好的" ("OK") · drpanhuaming · opened 1 year ago · 0 comments
#166 · With concurrent requests to the model API, inference time grows linearly; is there a good way to speed up inference? · chaotec · opened 1 year ago · 2 comments
#165 · Has anyone run Baichuan-13B on a 16 GB MacBook Pro? · mosthandsomeman · opened 1 year ago · 1 comment
#164 · Are there plans to strengthen the function calling ability of the open-source Baichuan models? · huajianmao · opened 1 year ago · 2 comments
#163 · Model inference has become slower · tianbuwei · closed 1 year ago · 2 comments
#162 · Is there a special use for vocabulary tokens like \U0010fc06 and <reserved_7>? · CanvaChen · closed 1 year ago · 0 comments
#161 · The Baichuan tokenizer tokenizes classical poetry inaccurately · CanvaChen · closed 1 year ago · 3 comments
#160 · Can baichuan-13b-chat be LoRA fine-tuned on two A10 GPUs? · suihuoliuying · opened 1 year ago · 0 comments
#159 · web_demo.py fails at runtime with CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` · matyhtf · opened 1 year ago · 1 comment
#158 · After quantizing the model to int8 or int4, how do I save it? · wanglaiqi · opened 1 year ago · 0 comments
#157 · Can LoRA fine-tuning with text-generation-webui be supported? · BUJIDAOVS · opened 1 year ago · 0 comments
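Several issues above (for example #190 and #189) concern simply loading the checkpoint with Hugging Face transformers. The `ValueError: Tokenizer class BaichuanTokenizer does not exist or is not currently imported` usually means the custom tokenizer code shipped with the checkpoint was not loaded, which `trust_remote_code=True` enables. Below is a minimal illustrative sketch, assuming the `transformers` auto classes and the `baichuan-inc/Baichuan-13B-Chat` checkpoint; it is not the repository's official deployment instructions.

```python
# Illustrative sketch (not from this repository): loading Baichuan-13B-Chat
# with Hugging Face transformers. trust_remote_code=True lets transformers
# import the custom BaichuanTokenizer / modeling code bundled with the
# checkpoint, which is the usual cause of the ValueError reported in #190.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_path = "baichuan-inc/Baichuan-13B-Chat"  # or a local copy for offline use (issue #189)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit typical GPU memory
    device_map="auto",          # requires `accelerate`; spreads weights over available GPUs
    trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(model_path)

# The checkpoint's custom code exposes a chat() helper (documented in the repo README).
messages = [{"role": "user", "content": "你好"}]
response = model.chat(tokenizer, messages)
print(response)
```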