-
### Feature request
I would like to request that BetterTransformer not be deprecated.
### Motivation
I have come to rely heavily on BetterTransformer to accelerate RoBERTa and BERT models.…
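For context, a minimal sketch of the usage being relied on, via the `optimum` integration (the checkpoint name is illustrative):

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

# Load a vanilla encoder, then swap its layers for the fused
# BetterTransformer fastpath kernels.
model = AutoModel.from_pretrained("roberta-base")
model = BetterTransformer.transform(model)
```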
-
**Is your feature request related to a problem? Please describe.**
My database has BigInt columns, so our Prisma schema has models that contain BigInt fields. Unfortunately, as I'm…
-
## Issue Description
When attempting to convert a slow tokenizer to a fast tokenizer with the `transformers` library, the conversion fails because there is no conversion path from Tiktoken. The error sug…
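A minimal, hypothetical repro (the checkpoint name is a placeholder for any model whose tokenizer ships only as a Tiktoken vocabulary):

```python
from transformers import AutoTokenizer

# Requesting the fast variant forces the slow→fast conversion path,
# which is where the Tiktoken error surfaces.
tokenizer = AutoTokenizer.from_pretrained("org/tiktoken-backed-model", use_fast=True)
```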
-
**Description**:
I'm attempting to load a pretrained model (ViT-B-16-SigLIP-i18n-256) entirely in offline mode, inside a Docker container running in an AWS Lambda environment. Despite setting the appropriate …
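For reference, a sketch of the kind of offline setup in question, assuming the model is loaded through `open_clip`; the environment variables and cache path here are the usual candidates, not a quote from the actual configuration:

```python
import os

# Force both huggingface_hub and transformers to skip network calls.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

import open_clip

# The cache directory is illustrative; the weights must already be baked
# into the image for this to succeed without network access.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:timm/ViT-B-16-SigLIP-i18n-256",
    cache_dir="/opt/ml/model",
)
```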
-
Bumps [transformers](https://github.com/huggingface/transformers) from 4.36.1 to 4.39.2.
**Release notes**
Sourced from transformers' releases.

**Patch release v4.39.2**
Series of fixes for backwards compa…
-
**Describe the bug**
When I installed intel-npu-acceleration-library on Ubuntu 24.04, it failed with an "Ubuntu version 24.04 unsupported" error.
**To Reproduce**
Device: Intel(R) Core(TM) Ultra 7 155H
OS: …
-
Hi!
First of all, congrats on such a great model!
I am an MLE at Hugging Face, and given the popularity and performance of your model, we are eager to integrate it into the Transformers 🤗 library. I…
-
### Feature request
It would be nice to allow fetching the token embeddings produced during a cross-encoding pass, which is necessary to implement systems such as retrieval-augmented named entity recognition [(RA-NE…
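As a sketch of the workaround this feature would make unnecessary, one can run the encoder behind the cross-encoder directly and read the per-token hidden states (the checkpoint name is illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)  # base encoder, without the scoring head

# Encode the (query, passage) pair exactly as the cross-encoder would.
inputs = tokenizer("query text", "candidate passage", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

token_embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
```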
-
Thanks for your great implementation.
Could you please look into the following issue I hit when running your provided Colab file? When running the cell under the title **Quantized Bonito Wrapper**, …
-
The following code behaves as expected:
```python
from wand.image import Image # MUST BE RUN BEFORE `from transformers import ... ` or will produce silent failure
from transformers import PreTrai…