-
### Your current environment
```
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: …
-
Hello dear 🤗Accelerate team!
As far as I know, there is no option to define a weights-file name other than the default `pytorch_model.bin` or `model.safetensors` when using `accele…
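One workaround that doesn't require a new option is to expose the custom file under the default name via a symlink. This is only a sketch under that assumption; `expose_default_name` is a hypothetical helper, not part of Accelerate.

```python
import os
import tempfile

def expose_default_name(checkpoint_dir: str, custom_name: str) -> str:
    """Expose a custom weights file under the default `pytorch_model.bin`.

    Hypothetical helper, not part of Accelerate: it only adds a symlink so
    loaders that look for the default file name can still find the weights.
    """
    dst = os.path.join(checkpoint_dir, "pytorch_model.bin")
    if not os.path.exists(dst):
        # Relative link target, so the directory stays relocatable.
        os.symlink(custom_name, dst)
    return dst

# Tiny demonstration with a throwaway directory and an empty weights file.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "my_weights.bin"), "wb").close()
default_path = expose_default_name(demo_dir, "my_weights.bin")
```

This keeps the original file untouched and costs nothing on disk; the obvious downside is that symlinks behave differently on Windows.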
-
## 🚀 Feature
HF transformers implements 8-bit and 4-bit quantization. It would be nice if that feature could be leveraged for the xlm-r-xxl machine translation eval model.
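For context, the core idea behind 8-bit weight quantization can be sketched in a few lines. This is an illustrative absmax sketch, not the bitsandbytes implementation that transformers actually uses:

```python
def quantize_absmax(weights):
    """Absmax 8-bit quantization sketch (illustrative, not bitsandbytes).

    Scales the floats so the largest magnitude maps to 127, then rounds
    to signed 8-bit integers. Returns the int8 values plus the scale
    needed to dequantize.
    """
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    # Approximate reconstruction of the original float weights.
    return [q * scale for q in quants]
```

In transformers itself this is exposed declaratively, e.g. via `load_in_8bit=True` / `load_in_4bit=True` or a `BitsAndBytesConfig` passed to `from_pretrained`, so supporting it for an eval model is mostly a matter of plumbing those options through.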
### Motivation
The lar…
-
**The bug**
It seems that many models loaded with `models.Transformers()` error out with:
`AssertionError: The passed tokenizer does have a byte_decoder property and using a standard gpt2 byte_d…
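The `byte_decoder` the assertion refers to is just the inverse of GPT-2's well-known `bytes_to_unicode` table (reproduced here as a sketch), which maps every byte to a printable surrogate character:

```python
def bytes_to_unicode():
    """GPT-2's reversible byte-to-printable-character table (256 entries)."""
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Unprintable bytes get remapped above the Latin-1 range.
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

# A byte_decoder maps each surrogate character back to its byte value.
byte_decoder = {ch: b for b, ch in bytes_to_unicode().items()}
```

A workaround people sometimes try is attaching such a table to the tokenizer (e.g. `tokenizer.byte_decoder = byte_decoder`), but whether that is actually correct depends on the tokenizer's own vocabulary encoding, which is presumably why guidance asserts instead of guessing.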
-
### Your current environment
There are some related issues: #2729, #6723.
The output of `python collect_env.py`:
```text
Deploy model on V100
Versions of relevant libraries:
[pip3] flashinfe…
-
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Flmc/DISC-MedLLM is not the path to a directory containing a file nam…
-
Is it possible to adapt this to Cohere Command-R models?
-
Could someone please share some example code for how to generate text using a model with a soft prompt?
I have fine-tuned a soft-prompt model (as implemented in this repo); however, when I try to use…
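The usual mechanic, sketched here with plain lists (the helper names are hypothetical, not from this repo): embed the input ids, prepend the trained prompt vectors, and pass the result as `inputs_embeds` together with an attention mask that covers the prompt positions.

```python
def build_inputs_embeds(soft_prompt, token_embeds):
    """Prepend trained soft-prompt vectors to the token embeddings.

    soft_prompt: k vectors (lists of floats), the learned prompt.
    token_embeds: n vectors, the embedded input tokens.
    Returns the (k + n)-length sequence to pass as `inputs_embeds`.
    """
    return list(soft_prompt) + list(token_embeds)

def build_attention_mask(k, n):
    # The mask must cover the k virtual prompt positions as well.
    return [1] * (k + n)
```

With transformers-style models this typically means calling `model.generate(inputs_embeds=..., attention_mask=...)` instead of passing `input_ids`; exact support varies by model class, so check what the decoder accepts.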
-
Hi guys, hope this issue finds you well krub. We had a quick discussion at the `Capital Market Datathon` yesterday, if you remember.
However, I have already taken a look at this `SETalyze` repo, handling…
-
I'm trying to use the using_t5.py script, but I get the above error.
I'm running on Win10, Python 3.8.
Transformers 4.3.3.
Any idea?
Thanks,
Dorit