-
```
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM
```
results in
```
RuntimeError: Failed to import optimum.neuron.modeling because of the fol…
```
-
### System Info
- `transformers` version: 4.46.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.27
- Python version: 3.12.5
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
…
-
### Expected Behavior
I shouldn't be getting a CUDA error.
### Actual Behavior
I am not able to use ComfyUI after the latest update; it was working fine yesterday.
### Steps to Reproduce
It is not ab…
-
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close when maintenance is complete or there will be prob…
-
Hi,
I basically followed https://www.modelscope.cn/models/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8
and assumed that `24G` of GPU memory would be enough for the model:
![image](https://github.co…
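A back-of-the-envelope memory estimate helps sanity-check whether 24 GB should fit the Int8 weights plus a KV cache. This is a sketch with assumed architecture numbers (layer count, KV heads, head dim), not values read from the model's actual config:

```python
# Rough memory estimate for a 7B GPTQ-Int8 model. The layer/head numbers
# below are illustrative assumptions, not the model's published config.
params = 7e9                          # ~7B parameters
weights_gb = params * 1 / 1024**3     # GPTQ-Int8 stores ~1 byte per weight

# KV cache per token, assuming 28 layers, 4 KV heads (GQA), head_dim 128,
# and fp16 (2 bytes) for both K and V:
layers, kv_heads, head_dim = 28, 4, 128
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2
kv_gb_32k = kv_bytes_per_token * 32768 / 1024**3

print(f"weights ~ {weights_gb:.1f} GB, 32k-token KV cache ~ {kv_gb_32k:.1f} GB")
```

On paper that leaves headroom on a 24 GB card, so an OOM here typically comes from vision-encoder activations on large images, long multimodal sequences, or framework overhead rather than the quantized weights themselves.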
-
Running the official demo notebook 05-Qwen2.5-7B-Instruct Lora.ipynb, the final cell:
```python
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
```
raises an error:
```
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Te…
```
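That `...\cuda\Te…` source path usually belongs to a device-side assert (for example an out-of-range index), which CUDA reports asynchronously and therefore far from the real failure. A common debugging sketch (the variable names from the notebook are assumptions):

```python
import os

# Must be set before torch initializes CUDA: it makes kernel launches
# synchronous, so the traceback points at the op that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Alternative: replay the failing call on CPU, where index/embedding
# errors raise immediately and include the offending values.
# outputs = model.cpu().generate(
#     **{k: v.cpu() for k, v in inputs.items()}, **gen_kwargs
# )
```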
-
Provide OpenCE v1.7.11 for all platforms except PPC CUDA, with the following fixes:
- [ ] Fix CVE-2024-34069 for werkzeug
- [ ] Fix CVE-2024-3568 for transformers
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
```
-
### 🐛 Describe the bug
The following example produces a node with no user.
```python
def test_mistral_nousers(self):
    import torch
    import transformers
    config = tra…
```
-
### System Info
- `transformers` version: 4.44.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate vers…