-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
I am currently evaluating the chatglm3-6b model on 4 A800 (80 GB) GPUs. When running the v0.2 code, some datasets fail with errors. Details:
Evaluation on the instruct, review, and plan json datasets works fine, but for plan str and retrieve str, the following error is raised partway through the run:
```
Traceback (most recent call last):
F…
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
The GPU memory doesn't change…
hxdbf updated 11 months ago
-
## detail | 详细描述 | 詳細な説明
Hello, when running `python3 -m huixiangdou.main --standalone` I hit the following error:
Notes:
1. Deployed on a local server
2. The models bce-embedding-base_v1, bce-reranker-base_v1, and internlm2-chat-7b were all downloaded from hf-mirror.com
3. …
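Since the models come from hf-mirror.com, a common setup step is to route `huggingface_hub` through the mirror before anything imports it. A minimal sketch, assuming the standard `HF_ENDPOINT` variable; the repo ID and local path in the comment are assumptions, so check the HuixiangDou docs for the paths it expects:

```python
# Route Hugging Face downloads through the hf-mirror.com mirror.
# HF_ENDPOINT must be set before huggingface_hub is imported.
import os

os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

# Example download (repo ID and local dir are assumptions; adjust to your layout):
# from huggingface_hub import snapshot_download
# snapshot_download("maidalun1020/bce-embedding-base_v1",
#                   local_dir="models/bce-embedding-base_v1")

print(os.environ["HF_ENDPOINT"])
```

If the endpoint is set only in the shell, make sure it is exported in the same session that launches `huixiangdou.main`, or the download will still go to huggingface.co.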
-
Here is my file structure; the models have already been downloaded into it.

But when I run it, I get the following error:
/home/shf/anaconda3/envs/llama/bin/python /media/sh…
-
## Issue1 on xpu with python 3.10 [Fixed after releasing bigdl-core-xe and bigdl-core-xe-esimd for python 3.10]
On Arc14, I followed https://github.com/intel-analytics/BigDL/blob/main/python/llm/exa…
-
Thanks for open-sourcing this!
I recently tried the InternLM-XComposer-VL-7b model and the results are great, but inference is a bit slow. I'm running it on a V100 with 26 GB of GPU memory in use, at about 10 s per sample. Are there any recommended ways to speed up inference? Any advice would be appreciated.
I also tried the internlm/internlm-xcomposer-7b-4bit model on the same machine: GPU memory usage dropped from 26 GB to 20 GB, but latency doubled to 20 s per sample. I'm not sure whether something in my setup…
-
`"addmm_impl_cpu_" not implemented for 'Half'`
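This error typically means a model whose weights are in fp16 ("Half") is being run on the CPU, where many PyTorch builds have no fp16 matmul kernel. A minimal sketch of the usual workaround, assuming PyTorch; the layer and shapes are purely illustrative:

```python
import torch

layer = torch.nn.Linear(4, 4).half()        # fp16 weights, as a half-precision checkpoint would load
x = torch.randn(1, 4, dtype=torch.float16)

if torch.cuda.is_available():
    layer, x = layer.cuda(), x.cuda()       # fp16 matmul is supported on GPU
else:
    layer, x = layer.float(), x.float()     # fall back to fp32 on CPU to avoid the 'Half' error

y = layer(x)
```

In practice this means either moving the model to the GPU (`model.half().cuda()`) or, if it must run on CPU, loading it in float32 (`model.float()`).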
-
When I run the internlm-xcomposer2-4khd-7b model with `python examples/gradio_demo_chat.py --code_path=/mnt/tenant-home_speed/model/internlm-xcomposer2-4khd-7b/ --port 7804`, I get the following error: TypeError: Accordion.__init__() …
-
Setting up both the MindSearch API and the frontend completed correctly, using the provided
`python -m mindsearch.app --lang en --model_format internlm_server --search_engine DuckDuckGoSearch`
command.
Fro…