-
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
### Describe the bug
Traceback (most recent call l…
-
Qwen2 has just been released, and its translation quality is noticeably better than before.
Chat demo: https://www.modelscope.cn/studios/qwen/Qwen2-72B-Instruct-demo/summary
Model page: https://www.modelscope.cn/models/qwen/Qwen2-72B-Instruct/summary
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux
### P…
-
> Higress GitHub page: https://github.com/alibaba/higress
## Step 1: Create a file named `docker-compose.yml` and fill in the following content:
> Note:
> 1. Replace `YOUR_DASHSCOPE_API_KEY` wi…
-
After upgrading to version 0.3.1, plain-text Q&A works fine, but image-to-text still fails. For example, with the InternVL-Chat model, every failure occurs at the dialogue call `client.chat.completions.create`: whether streaming is set to true or false, the server returns a 500 error. If image-to-text is indeed supported, it would help if the maintainer could give a detailed explanation of its usage and caveats.
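For reference, the widely used shape for such requests is the OpenAI-compatible multimodal message format. The sketch below (stdlib only; the model name and image URL are placeholders, not values from the report) builds the JSON body that a `client.chat.completions.create` call would send:

```python
import json

# Hedged sketch: an OpenAI-compatible multimodal request body.
# "InternVL-Chat" and the image URL are placeholders; adjust them to
# your deployment before POSTing to /v1/chat/completions.
payload = {
    "model": "InternVL-Chat",
    "stream": False,  # the 500 reportedly occurs with both True and False
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/demo.png"}},
        ],
    }],
}
body = json.dumps(payload)
```

If the server rejects this shape, the 500 may come from the backend not parsing the list-valued `content` field rather than from the model itself.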
-
I started xinference on a machine with two A800 cards (80 GB of VRAM each) and launched two models. The first, a DeepSeek model, started normally. The second is a Qwen-14B model.
When the Qwen-14B model is started on card 1, it fails with the error below:
allel_size': 1, 'block_size': 16, 'swap_space': 4, 'gpu_memory_utilization': 0.9, 'max_num_seqs':…
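One plausible cause, sketched with back-of-the-envelope arithmetic (the 0.9 figure is the `gpu_memory_utilization` value visible in the log; the rest is an assumption about vLLM's up-front reservation behavior): a vLLM engine reserves that fraction of the card's memory when it starts, so a second engine landing on the same card has very little left to allocate.

```python
# Hedged arithmetic sketch: gpu_memory_utilization is the fraction of a
# card's memory a vLLM engine reserves for itself at startup. With the
# values from the log above, one engine alone claims most of an 80 GB
# A800, which can make a second model on the same card fail to start.
total_gb = 80          # A800 VRAM per card
util = 0.9             # 'gpu_memory_utilization' from the log
claimed_gb = total_gb * util
print(claimed_gb)      # 72.0 -> only ~8 GB left for anything else
```

If this is the cause, pinning each model to its own card (or lowering `gpu_memory_utilization`) would be the usual workaround.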
-
Thanks to the developers for all their hard work!
While building a Docker container to run Qwen2.5-7B, I ran into some errors.
The error message is as follows:
```
==========
== CUDA ==
==========
CUDA Version 12.2.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATE…
-
### Describe the bug
`interpreter --local`
:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
▌ Open Interpreter is compatible with several local model providers.
[?] What one would yo…
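The `SyntaxWarning: "is" with a literal` in the output above is Python (3.8+) flagging an identity comparison where an equality check was intended; a minimal illustration:

```python
# The warning flags comparisons like `if choice is "local":`.
# `is` checks object identity, not value equality, so it can fail even
# when the strings hold the same text. Use `==` for string comparison.
choice = "".join(["lo", "cal"])  # "local", built at runtime
equal = choice == "local"
print(equal)  # True
```

The warning is harmless at runtime but points at a latent bug in the package's source, not in the user's command.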
-
### What is the issue?
When I run the CLI command `ollama run qwen2:72b-instruct-q2_K`,
the model downloads, but then fails to run with:
Error: llama runner process has terminated: signal: aborted (core dumped)
…
-
LLaMA Factory now supports **instruction fine-tuning, RLHF, DPO, and SimPO** for the GLM-4-9B and GLM-4-9B-Chat models:
https://github.com/hiyouga/LLaMA-Factory/blob/main/README_zh.md
### Instruction Fine-Tuning
```bash
CUDA_VISIBLE_DEVICES=0,1 HF_END…