-
### System Info
The current Transformers framework doesn't support GGUF quantized model files from deepseek2. Could you please advise when this support might be added? @SunMarc @MekkCyber
###…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
`gh-changelog` now supports config files in the current directory; however, today it won't create them.
Users are required to copy the default config from `~/.config/gh-changelog/.changelog`.
In th…
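A minimal sketch of the manual workaround described above, run entirely inside a temporary sandbox so it doesn't touch a real `~/.config`; the config contents written here are a placeholder, not gh-changelog's actual defaults.

```shell
# Demonstrates the manual step users must do today: copy the default
# config from ~/.config/gh-changelog/.changelog into the current
# directory, because gh-changelog reads a local config but will not
# create one itself.
set -eu

workdir=$(mktemp -d)
export HOME="$workdir/home"    # sandboxed HOME so the demo is self-contained
mkdir -p "$HOME/.config/gh-changelog"

# Placeholder contents standing in for the real default config.
echo "file_name: CHANGELOG.md" > "$HOME/.config/gh-changelog/.changelog"

cd "$workdir"
cp "$HOME/.config/gh-changelog/.changelog" ./.changelog
cat ./.changelog
```

In real use, only the single `cp` into the project directory is needed.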
-
### System Info
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- PyTorch version: 2.4.1
- CUDA device: NVIDIA A100-SXM4-80GB
- Transformers version: 4.45.0.…
-
The link inside the sample config file, in the section headed **Auto-Restart limitations and Cold Restart**, is out of date. It currently redirects to:
https://www.ibkrguides.com/traderworkstation/…
-
### Tested versions
Reproducible in:
- v4.4.dev4.official [36e6207bb] (latest version tested)
- v4.3.stable.official [77dcf97d8]
- v4.2.2.stable.official [15073afe3]
- v4.0.alpha1.official [31a…
-
### System Info
Ubuntu 24.04
Transformers 4.46.2
### Who can help?
@ArthurZucker @Cyrilvallez
[Cyrilvallez](https://github.com/Cyrilvallez), https://github.com/huggingface/transformers/pull/3…
-
**Describe the bug**
When using the preset W8A8 recipe from llm-compressor, the resulting model's config.json fails validation when loaded by HF Transformers. This is a dev version of Tr…
-
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.19
- Huggingface_hub version: 0.23.5
- Safetensors ve…
-
I tried to load the model with `transformers.AutoModel.from_pretrained`, but I got this error:
```
Exception has occurred: KeyError (note: full exception trace is shown but execution is paused a…