-
**Describe the bug**
The `ilab model train` command finishes with the error "Expected `list[str]` but got `tuple` with value `('q_proj', 'k_proj', 'v_proj', 'o_proj')` - serialized value may not be as ex…
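For context, this kind of message typically comes from pydantic-style serialization of a config field typed `list[str]` that received a tuple. A minimal sketch (not ilab's internals; the `LoraConfig` values below are placeholders) of passing the target modules as a list rather than a tuple:

```python
# Minimal sketch, assuming peft is installed; not taken from ilab's code.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,
    lora_alpha=32,
    # passing a list[str] instead of a tuple avoids the
    # "Expected `list[str]` but got `tuple`" serialization warning
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```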
-
Hi, great tool!
To make it even better, it would be nice to also support protein-level result export to .txt/.tsv, as is already implemented for the peptide-level table.
At least, pFind.protein cannot read…
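To illustrate the request, here is a purely hypothetical sketch of a tab-separated protein-level export (column names are made up and do not reflect the tool's real schema or API):

```python
# Hypothetical example of the requested .tsv/.txt protein-level output.
import pandas as pd

protein_results = pd.DataFrame(
    {
        "protein_id": ["P12345", "Q67890"],  # placeholder accessions
        "n_peptides": [12, 7],
        "score": [0.98, 0.91],
    }
)
# Tab-separated text, analogous to the existing peptide-level table.
protein_results.to_csv("protein_results.tsv", sep="\t", index=False)
```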
-
Hi, after multi-node model-parallel fine-tuning finished, I tried to run inference with cli_demo_mp.pt, but when loading the model I get an error saying that model_parallel_size does not match the current configuration file. Details below:
![image](https://github.com/THUDM/VisualGLM-6B/assets/38753856/447d54da-f1e2-4525-a9f1-7bfd340cd2a1)…
-
So I have a GPTQ llama model I downloaded (from TheBloke), and it's already 4-bit quantized. I have to pass in False for the load_in_4bit parameter of:
```
model, tokenizer = FastLlamaModel.from_pr…
```
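For reference, a minimal sketch of what that call might look like with `load_in_4bit=False`; the import path, model name, and other arguments are assumptions based on unsloth's usual `from_pretrained` signature, not taken from the issue, and whether pre-quantized GPTQ weights are supported is exactly what is being asked here:

```python
# Sketch only: model_name and max_seq_length are placeholders.
from unsloth import FastLlamaModel

model, tokenizer = FastLlamaModel.from_pretrained(
    model_name="TheBloke/Llama-2-7B-GPTQ",  # hypothetical GPTQ checkpoint
    max_seq_length=2048,
    load_in_4bit=False,  # weights are already GPTQ-quantized
)
```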
-
### System Info
- `transformers` version: 4.45.1
- Platform: Linux-5.10.225-213.878.amzn2.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.5
- Safetensors …
-
I'm trying to improve localGPT performance, using constitution.pdf as a reference (my real .pdf docs are 5-10 times bigger than constitution.pdf, and answers take even longer).
1. I used 'TheBlo…
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…
-
### System Info
```
python: 3.12.4
transformers: 4.45.2
trl: 0.11.4
huggingface: 0.25.2
accelerate: 1.0.1
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### …
-
### Your current environment
- vLLM (CPU): v0.6.0
- Hardware: Intel(R) Xeon(R) Platinum 8480+ CPU
- Model: google/gemma-2-2b
### 🐛 Describe the bug
vLLM v0.6.0 (cpu) is throwing below erro…
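A minimal reproduction sketch, assuming a CPU build of vLLM v0.6.0 (the prompt and sampling settings are placeholders, not from the issue):

```python
from vllm import LLM, SamplingParams

# Assumes vLLM was installed/built with the CPU backend.
llm = LLM(model="google/gemma-2-2b", dtype="bfloat16")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```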
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
The GPU memory doesn't change…