-
How many GPUs are needed to finetune? I have tried 16 GPUs (96 GB each) but got a CUDA out-of-memory error.
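A rough, hedged back-of-envelope estimate of why full finetuning OOMs even on many large GPUs (assumptions: Adam optimizer with mixed precision, i.e. roughly 16 bytes per parameter for weights, gradients, and fp32 optimizer states; activations excluded; the function name is illustrative):

```python
# Hedged sketch: standard mixed-precision Adam accounting, ~16 bytes/param
# (2 fp16 weights + 2 fp16 grads + 12 fp32 optimizer states), activations excluded.
def full_finetune_vram_gb(n_params_billion: float) -> float:
    """Rough total memory in GB for full finetuning of an n-billion-parameter model."""
    bytes_per_param = 2 + 2 + 12  # weights + grads + Adam states
    return n_params_billion * bytes_per_param  # 1e9 params and 1e9 bytes/GB cancel

# A 70B model needs roughly 1120 GB before activations, so even
# 16 x 96 GB = 1536 GB can OOM once activations and fragmentation are added,
# unless the states are sharded (e.g. ZeRO/FSDP) or LoRA/QLoRA is used.
```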
-
With the new version of transformers there is no need to use BetterTransformer; try setting the attention implementation (`attn_implementation`) to `sdpa`...
When I set the attention implementation I get:
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used …
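For reference, a minimal sketch of selecting the SDPA backend via `from_pretrained` kwargs (the helper function and model id are illustrative, not from the issue; `attn_implementation="sdpa"` is supported in recent transformers releases, with `"eager"` and `"flash_attention_2"` as alternatives):

```python
# Hedged sketch: build from_pretrained kwargs that request PyTorch's
# scaled-dot-product-attention backend instead of BetterTransformer.
def model_load_kwargs(use_sdpa: bool = True) -> dict:
    """Return kwargs for AutoModelForCausalLM.from_pretrained selecting an attention backend."""
    kwargs = {"torch_dtype": "auto"}
    if use_sdpa:
        kwargs["attn_implementation"] = "sdpa"
    return kwargs

# Usage (requires transformers >= 4.36 installed):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("your-model-id", **model_load_kwargs())
```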
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
The latest LLaMA-Factory repo (12 Sept 2024) forces Torch 2.4, which clashes with Unsloth/XFormers.
##…
-
I'm using the following setup in my project:
- PyTorch: 2.2.0
- Python: 3.10
- CUDA: 12.1.1
When running the command:
`pip install "unsloth[cu121-ampere-torch220] @ git+https://github.com/unslo…
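Before installing, it can help to check that the extras tag you picked (e.g. `cu121-ampere-torch220`) actually matches the local Torch and CUDA versions. The helper below is an illustrative sketch, not part of unsloth:

```python
# Hedged sketch: verify an unsloth extras tag like "cu121-ampere-torch220"
# against the environment's Torch and CUDA versions before installing.
def extra_matches(extra: str, torch_version: str, cuda_version: str) -> bool:
    """Check that the extras tag's cuXXX prefix and torchXXX suffix agree with the env."""
    cu = "cu" + cuda_version.replace(".", "")[:3]      # "12.1"  -> "cu121"
    tv = "torch" + torch_version.replace(".", "")[:3]  # "2.2.0" -> "torch220"
    return extra.startswith(cu) and extra.endswith(tv)

# In practice you would feed in torch.__version__ and torch.version.cuda.
```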
-
Hi all, I am trying to fine-tune models in extremely long contexts.
I've tested the training setup below, and I managed to finetune:
- llama3.1-1B with a max_sequence_length of 128 * 1024 tokens
…
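As background for why extremely long contexts are hard: without FlashAttention, each layer materializes a full `seq_len × seq_len` score matrix per head, so memory grows quadratically in sequence length (FlashAttention reduces this term to linear). A rough, illustrative estimate (head/layer counts are placeholders, not the models above):

```python
# Hedged sketch: memory for materialized attention score matrices when
# FlashAttention is NOT used: one (seq_len x seq_len) matrix per head per layer.
def naive_attn_scores_gb(seq_len: int, n_heads: int, n_layers: int,
                         bytes_per_elem: int = 2) -> float:
    """Approximate GB consumed by attention score matrices in naive attention."""
    return n_layers * n_heads * seq_len * seq_len * bytes_per_elem / 1e9

# Doubling the context from 64K to 128K quadruples this term,
# which is one reason a max_sequence_length of 128 * 1024 is so demanding.
```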
-
When I try to execute `from unsloth import FastLanguageModel`, the following error appears.
---------------------------------------------------------------------------
TokenError …
-
```py
from unsloth import FastLanguageModel
from unsloth import is_bfloat16_supported
import torch
from unsloth.chat_templates import get_chat_template
from trl import SFTTrainer
from transform…
-
https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing#scrollTo=2eSvM9zX_2d3
%%capture
!pip install unsloth
# Also get the latest nightly Unsloth!
!pip uninstall u…
-
I'm trying to create a `conda` environment using the module `anaconda3` but it fails.
```sh
module load anaconda3
conda create --name unsloth.sl python=3.10
```
```
...
Downloading and Extrac…
-
Hi there, I wrote two methods that allow unsloth models to be loaded into and unloaded from memory. To my knowledge, this is the only way to change unsloth models
```
llm_mode…