-
### 🚀 The feature, motivation and pitch
Liger is missing support for a number of language model architectures. Additionally, I'd like to be able to patch any model without consideration for the mod…
lapp0 updated 3 weeks ago
-
### Your current environment
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (U…
-
**Problem Description**
After deployment on a local LAN, cross-origin requests across servers work fine once CORS is enabled.
After deployment to the user's production server, every API call reports a cross-origin error no matter how CORS is configured.
![image](https://github.com/chatchat-space/Langchain-Chatchat/assets/7848823/a0f4ec25-72b9-4f4e-b21c-9206f…
-
Chatting with chatglm3-6b through xinference from the Dify intelligent assistant raises an error, while the basic assistant chats normally. Additionally, qwen-14 works fine with both the basic assistant and the intelligent assistant.
- xinference error logs
```
INFO 04-10 16:30:35 llm_engine.py:653] Avg prompt throughput: 24.8 tokens/s, …
-
Running `python3 -m mlx_lm.lora --model models/Qwen1.5-32B-Chat --data data/ --train --iters 1000 --batch-size 16 --lora-layers 12` raises:
`Repository Not Found for url: https://huggingface.co/api/models/model…
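A quick sanity check before launching the fine-tune (a sketch under an assumption: tools built on huggingface_hub, mlx_lm included, typically try the `--model` value as a local directory first and fall back to treating it as a Hub repo id, which is the usual source of "Repository Not Found" errors for paths that do not exist from the current working directory):

```python
from pathlib import Path

def check_local_model(model_arg: str) -> bool:
    """Return True if model_arg resolves to an existing local model directory.

    If this returns False, the loader will likely fall back to the Hugging
    Face Hub and fail with "Repository Not Found" when no repo of that name
    exists. (Assumed fallback behavior; verify against your mlx_lm version.)
    """
    return Path(model_arg).expanduser().is_dir()

# Example: the path from the failing command, relative to the current directory.
print(check_local_model("models/Qwen1.5-32B-Chat"))
```

Running the command from a different directory than the one containing `models/` would reproduce exactly this failure mode.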
-
**Environment Information**
- langchain-ChatGLM version/commit number: (e.g., v2.0.1 or commit 123456)
- Deployed with Docker (yes/no): …
-
**I have already updated my nvidia driver(#331)**, but still getting the same error:
_RuntimeError: CUDA error: device-side assert triggered. CUDA kernel errors might be asynchronously reported at so…
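One standard way to localize a device-side assert (a sketch; the failing command below is a placeholder): CUDA kernel launches are asynchronous, so the error often surfaces at a later, unrelated API call. Forcing synchronous launches makes the traceback point at the kernel that actually failed.

```shell
# Force synchronous CUDA kernel launches so the assert is raised at the
# real call site instead of a later API call.
export CUDA_LAUNCH_BLOCKING=1
# python your_training_script.py   # placeholder: substitute the failing command
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

Device-side asserts in language-model code are frequently out-of-range token ids (e.g., an embedding lookup past the vocabulary size), so checking input ids against the model's vocab size is a useful second step.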
-
### Your current environment
```text
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen1.5-0.5B-Chat --model /home/project/models/qwen-0.5b
```
### How would you like to use v…
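For reference, the command above starts vLLM's OpenAI-compatible server; a minimal sketch of a client request against it, assuming the default host/port (vLLM serves on port 8000 by default) and the served model name from the launch command:

```python
import json

def build_chat_request(prompt: str) -> tuple[str, dict]:
    """Build URL and body for vLLM's OpenAI-compatible chat endpoint.

    Host and port are assumptions (vLLM's default is 0.0.0.0:8000).
    """
    url = "http://localhost:8000/v1/chat/completions"
    payload = {
        "model": "Qwen1.5-0.5B-Chat",  # must match --served-model-name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return url, payload

url, payload = build_chat_request("Hello!")
print(url)
print(json.dumps(payload, indent=2))
# Send with e.g. requests.post(url, json=payload) against the running server.
```

Any OpenAI-compatible client works the same way; only the base URL and model name need to match the server flags.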
-
https://developers.generativeai.google/
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
![image](https://github.…