-
### I built it from the new version, but I can't import awq
- transformers 4.43.3
- torch 2.3.1
- torchaudio 2.4.0
- torchvision 0.19.0
…
-
### Motivation
Many `lmdeploy` counterparts (vllm, transformers, exllamav2, ...) provide `logits_processors` that allow users to modify the logits before softmax. This enables many useful features like …
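For reference, a minimal sketch of what such a processor looks like in one of the cited counterparts, using the Hugging Face `transformers` `LogitsProcessor` interface; the `BanTokenProcessor` class and the banned token id are illustrative, not part of any existing API:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class BanTokenProcessor(LogitsProcessor):
    """Illustrative processor: mask out one token id before softmax."""
    def __init__(self, banned_token_id: int):
        self.banned_token_id = banned_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Setting the logit to -inf gives the token zero probability after softmax.
        scores[:, self.banned_token_id] = float("-inf")
        return scores

# Passed to generation as:
#   model.generate(..., logits_processor=LogitsProcessorList([BanTokenProcessor(50256)]))
```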
-
### Describe the bug
Running the latest build results in a torch error:
```
python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
```
...
```
Runtime…
-
Hello there, when using the Google Colab notebook I reached this step:
```
from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_s…
-
### Feature request
Please provide support for gemma2 model export in Optimum for OpenVINO.
- optimum: 1.21.4
- transformers: 4.43.4
### Motivation
I encountered an issue while trying to expor…
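A minimal sketch of the kind of export call this request is about, assuming the `optimum-intel` `OVModelForCausalLM` API; the model id `google/gemma-2-9b` is an example placeholder, not taken from the report:

```python
from optimum.intel import OVModelForCausalLM

# export=True converts the PyTorch checkpoint to OpenVINO IR on load.
model = OVModelForCausalLM.from_pretrained("google/gemma-2-9b", export=True)
model.save_pretrained("gemma2-openvino")
```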
-
### System Info
Output from `transformers-cli env`:
```
- `transformers` version: 4.45.2
- Platform: Linux-6.1.0-21-cloud-amd64-x86_64-with-glibc2.36
- Python version: 3.12.5
- Huggingfa…
-
Apologies if this is obvious / impossible, but is there any way to open the pre-trained models using the huggingface transformers library? Looking to use this model in a paper but my pipeline is buil…
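In case it helps frame the question, a minimal sketch of loading a checkpoint with `transformers`, assuming the weights are published in a Hugging Face-compatible format; `org/model-name` is a placeholder, not the actual repository asked about:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("org/model-name")
model = AutoModel.from_pretrained("org/model-name")
```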
-
Vulnerable Library - transformers-4.19.2-py3-none-any.whl
State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Library home page: https://files.pythonhosted.org/packages/52/82/b62f139e7…
-
# Action Plan
- [ ] #141
- [ ] Load Hugging Face Object Detection datasets via nrtk-explorer CLI arg
- [ ] "Launcher" to select interesting combos of models/datasets to view with nrtk-explorer
…
-
Running on a Mac M3 Max with 128 GB, with this code:
```
from transformers import AutoModel, AutoTokenizer
MAX_LENGTH = 128
model = AutoModel.from_pretrained("unsloth/Meta-Llama-3.1-405B-Instruct-bnb-4b…