-
### System Info
!pip install git+https://github.com/huggingface/trl.git
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported ta…
-
I get this error:
```
Traceback (most recent call last):
File "/home/denis/Documents/ai/unsloth/llama3-chat-template.py", line 20, in
model, tokenizer = FastLanguageModel.from_pretrained(…
```
-
### System Info
```Shell
- `Accelerate` version: 1.0.1
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31
- `accelerate` bash location: /home/lmh/.local/bin/accelerate
- Python version: 3.10…
```
-
Run the following code, copied from the README file:
```python
import os
from pycsghub.repo_reader import AutoModelForCausalLM, AutoTokenizer
os.environ['CSG_TOKEN'] = 'my token from setting'
m…
```
-
### System Info
- `transformers` version: 4.45.1
- Platform: Linux-5.4.247-162.350.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.12
- Huggingface_hub version: 0.24.0
- Safetensors v…
-
Thank you for taking the time to review my question.
Before I proceed, I would like to mention that I am a beginner, so I would appreciate your patience.
I am seeking assistan…
-
Can you please export this Jupyter notebook, Llama-3-PyTorch.ipynb, to pure Python as Llama-3-PyTorch_model.py and Llama-3-PyTorch_tokenizer.py?
Because I want to try to adapt this to work w…
-
Why does my 32GB of memory fill up and the system become very sluggish when processing a 4-minute video, resulting in the following error?
```
SAMURAI mode: True
[15:18:19] D:\a\decord\decord\src\vide…
```
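One common cause of this symptom is decoding the entire video into memory before processing. As a general pattern (not specific to SAMURAI or decord), streaming the video in fixed-size frame batches keeps peak memory bounded at one batch rather than the whole clip. A minimal sketch with placeholder frame indices; a real pipeline would decode each batch from the video reader (e.g. decord's `VideoReader`) inside the loop:

```python
# Sketch: iterate over a long video in fixed-size frame batches so only one
# batch is resident at a time. Frame counts and batch size are placeholders.

def frame_batches(num_frames, batch_size):
    """Yield lists of frame indices in chunks of at most batch_size."""
    for start in range(0, num_frames, batch_size):
        yield list(range(start, min(start + batch_size, num_frames)))

def process_video(num_frames, batch_size):
    """Process frames batch by batch; peak memory is one batch, not the video."""
    processed = 0
    for batch in frame_batches(num_frames, batch_size):
        # decode + run the model on `batch` here, then drop the references
        # so the frames can be garbage-collected before the next batch
        processed += len(batch)
    return processed

print(process_video(num_frames=7200, batch_size=64))  # ~4 min at 30 fps
```

If memory still grows across batches, the issue is usually references kept alive between iterations (accumulated results, caches) rather than the decoder itself.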
-
Config:
Windows 10 with RTX4090
All requirements incl. flash-attn build - done!
Server:
```
(venv) D:\PythonProjects\hertz-dev>python inference_server.py
Using device: cuda
Loaded tokeniz…
```
-
### System Info
- `transformers` version: 4.46.2
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.7
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate ve…