-
### System Info
- `transformers` version: 4.44.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate vers…
-
Hi there,
I was struggling with how to implement quantization with AutoAWQ, as mentioned on the home page. I was trying to quantize the 7B Qwen2-VL model, but even with 2× A100 80 GB GPUs I still get CUDA OOM…
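For reference, a minimal sketch of the usual AutoAWQ quantization flow; the model id, output path, and `quant_config` values below are assumptions, and this does not confirm that current AutoAWQ releases fully support Qwen2-VL:
```
# Hedged sketch of a generic AutoAWQ quantization run; names and config are illustrative.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-VL-7B-Instruct"   # assumed checkpoint id
quant_path = "qwen2-vl-7b-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model (kept on CPU at this point) and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run calibration + 4-bit quantization, then persist the quantized weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
AutoAWQ normally calibrates one block at a time on the GPU, so a single 80 GB card is usually enough for a 7B model.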
-
My code is throwing the error below:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/net/scratch/user/miniconda3/envs/vl…
-
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: …
-
I am currently conducting research on applying Transformer models to EEG signals. From your paper, I learned that combining a CNN with a Transformer makes it possible to learn both global and local features…
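To make the hybrid concrete, here is a minimal sketch assuming a 1D-CNN front end over raw EEG channels feeding a standard Transformer encoder; the channel count, layer sizes, and classification head are placeholders, not the paper's configuration:
```
# Hypothetical CNN + Transformer hybrid for EEG windows (not the paper's model).
import torch
import torch.nn as nn

class CNNTransformerEEG(nn.Module):
    def __init__(self, n_channels=22, d_model=128, n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        # Local features: strided temporal convolutions over the raw EEG window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(d_model),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(d_model),
            nn.GELU(),
        )
        # Global features: self-attention over the sequence of CNN tokens.
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        tokens = self.cnn(x).transpose(1, 2)     # (batch, seq_len, d_model)
        encoded = self.encoder(tokens)           # attention mixes information globally
        return self.head(encoded.mean(dim=1))    # mean-pool, then classify

# Example: 8 windows, 22 channels, 1000 time samples -> (8, 4) class logits.
logits = CNNTransformerEEG()(torch.randn(8, 22, 1000))
print(logits.shape)
```
The strided convolutions shorten the sequence before attention, which keeps the quadratic attention cost manageable for long EEG windows.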
-
I installed directly with pip install -r requirements_web_demo.txt, but running the test demo fails; I also tried pip install git+https://github.com/huggingface/transformers
-
Thank you for your great work! I am not familiar with LLMs, NLP tasks, or the related models, i.e. BERT or GPT. To build my own understanding of the SpikeLM models, I read two good repositories. The first is the…
-
### Feature request
I am trying to train offline RL using the Decision Transformer and convert the trained model to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "seq…
-
How can I run Qwen2.5-7B-Instruct-GPTQ-Int4 on a Colab T4?
```
#!pip install auto-gptq transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_gptq import AutoGPT…
```
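A minimal loading sketch for this setup, assuming the full hub id is `Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4` and that `optimum` and `auto-gptq` are installed alongside `transformers`:
```
# Hedged sketch: load the 4-bit GPTQ checkpoint on a single Colab T4.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4"   # assumed full hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the quantized weights on the T4
    torch_dtype=torch.float16,  # T4 (Turing) has no bfloat16 support
)

messages = [{"role": "user", "content": "Give a one-line summary of GPTQ."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
The 4-bit weights plus the KV cache should fit comfortably within the T4's 16 GB.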
-
### Your question
```
sudo /usr/local/php-8.3.1/bin/php ./vendor/bin/transformers download openblas-linux-x86_64-0.3.27
✔ Initializing download...
PHP Fatal error: Uncaught TypeError: Codewithkyrian…
```