-
I'm trying to finetune _BAAI/bge-m3_, a model with an 8k max sequence length, on a retrieval task. Here's my trainer setup:
```python
model = SentenceTransformer(model_id, de…
```
-
### System Info
```Shell
- `Accelerate` version: 0.29.3
- Python version: 3.11.4
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the sc…
-
## Issue Description
**Second meeting** held; scheduling anticipated to begin work with the CMS team.
The CMS team needs some content assistance before their contract wraps up on Oct. 11.
CMS / Acce…
-
On a simple transformer model, I am observing a 10% training speed improvement with Hopper-architecture GPUs but not with Ada ones. The baseline is bf16.
I am using Hugging Face Accelerate to handle every…
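One thing worth ruling out is the GPU's compute capability: FP8 tensor cores exist on both Hopper (SM 9.0) and Ada (SM 8.9), but FP8 kernels (e.g. in Transformer Engine) are primarily tuned for Hopper, which could explain a speedup that only shows on H100-class cards. A small sketch of such a check (the function name is mine, not from any library):

```python
def fp8_capable(capability: tuple[int, int]) -> bool:
    """Return True if a CUDA compute capability has FP8 tensor cores.

    Hopper is SM 9.0 and Ada is SM 8.9; anything older (e.g. Ampere's
    SM 8.0/8.6) has no FP8 hardware path at all. Feed this the pair
    returned by torch.cuda.get_device_capability().
    """
    return capability >= (8, 9)

print(fp8_capable((9, 0)))  # Hopper H100
print(fp8_capable((8, 9)))  # Ada L40 / RTX 4090
print(fp8_capable((8, 6)))  # Ampere RTX 3090
```

If both cards pass this check, the gap is more likely kernel tuning than a missing hardware feature.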
-
### System Info
```Shell
Accelerate version: 0.34.0
Platform: Linux 5.15.0-101-generic #111-Ubuntu SMP x86_64 GNU/Linux
Python version: 3.10.14
PyTorch version (GPU?): 2.1.2+cu118 True
System RAM…
-
### 🚀 The feature, motivation and pitch
This paper might be of interest: https://arxiv.org/pdf/2403.09636.pdf
### Alternatives
_No response_
### Additional context
_No response_
-
How can we accelerate training with the Trainer loop or with native PyTorch, as in the tutorial made by @NielsRogge?
I am trying to make modifications to this notebook [colab](https://drive.googl…
-
Hello,
I noticed slowness in the evaluation procedure due to a misconfiguration related to the pexpect library. Adding the following line after the 85th line of the `base.py` file would accelerate …
-
Many web servers support accelerated downloads by sending a special header, such as Apache's X-Sendfile or nginx's X-Accel-Redirect. It would be nice to have support for those. However, there nee…
igorw updated 14 years ago
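For illustration, a minimal sketch of the nginx variant as a plain WSGI app (the internal path `/protected/...` is hypothetical and must match an `internal` location in the nginx config): the application emits only headers and an empty body, and nginx streams the file itself.

```python
def app(environ, start_response):
    """WSGI app that delegates file delivery to nginx via X-Accel-Redirect.

    nginx intercepts the header and serves the file from an `internal`
    location, so the application process never touches the file bytes.
    """
    start_response("200 OK", [
        ("Content-Type", "application/pdf"),
        ("Content-Disposition", 'attachment; filename="report.pdf"'),
        # Hypothetical internal path, resolved by an `internal` nginx location.
        ("X-Accel-Redirect", "/protected/report.pdf"),
    ])
    return [b""]  # empty body: nginx supplies the real content

# For Apache with mod_xsendfile, the equivalent would be an
# ("X-Sendfile", "/var/files/report.pdf") header with an absolute
# filesystem path instead.
```

A framework-level helper would mostly need to pick the right header name per server and guard against path traversal in the redirect target.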
-
Hi Tom,
I have been working on it and have made lots of changes.
I found something that I think is not working properly when one goes into my (or Tony's) pages through the button. When one is there, (scrolled…