-
Hi @coljac, thanks for this. I was checking whether I could do this myself, with a Dockerfile to make sure it keeps working despite updates.
I think this installs unsloth from the main branch, and wha…
-
-
Currently, Unsloth only supports single-GPU training; how can I run it with 8-GPU training? Thanks.
-
I am just curious whether the current unsloth supports full finetuning.
I am experimenting with training a TinyLlama model on a 24GB VRAM GPU right now. Using unsloth to just load the model without l…
-
import os
import random
import functools
import csv
import pandas as pd
import numpy as np
import torch
import torch.nn.functional as F
import evaluate
from sklearn.datasets import make_cla…
-
With the new version of transformers there is no need to use BetterTransformer; try setting the attention implementation to sdpa...
attn impl:
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used …
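For context, a minimal sketch of the suggestion above (assuming transformers >= 4.36, where the `attn_implementation` argument replaced the BetterTransformer path for enabling SDPA; the model name in the comment is only illustrative):

```python
import torch
import torch.nn.functional as F

# With a recent transformers, SDPA is selected at model load time, e.g.:
#   model = AutoModelForCausalLM.from_pretrained(
#       "unsloth/tinyllama-bnb-4bit",   # hypothetical model name
#       attn_implementation="sdpa",
#   )
# Under the hood this routes attention through the PyTorch 2.x primitive:

batch, heads, seq, dim = 1, 2, 8, 16
q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)
v = torch.randn(batch, heads, seq, dim)

# Fused scaled dot-product attention with a causal mask (PyTorch >= 2.0)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 2, 8, 16])
```

The output keeps the query's shape; SDPA picks the fastest available backend (Flash, memory-efficient, or math) automatically.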
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
The latest LLaMA-Factory repo (12 Sept 2024) forces Torch 2.4, which clashes with Unsloth/XFormers.
##…
-
Hi,
I tried to do SFT using the following models: 'unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit' and 'unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit'.
But 'TypeError: '
-
Hi. Something (possibly not unsloth) changed between July and now.
I am getting an unexpected OOM error when trying to do a LoRA finetune. This worked before, but is now barfing.
I looked at #338, but not…
-
pip install "unsloth[cu121-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "u…
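The right extras tag depends on your local Torch build, CUDA version, and GPU generation; a quick way to check which one applies (a sketch, assuming torch is installed):

```python
import torch

# Report the versions that determine the extras tag, e.g.
# torch 2.4.* + CUDA 12.1 + an Ampere-or-newer GPU -> unsloth[cu121-ampere-torch240]
print("torch  :", torch.__version__)
print("cuda   :", torch.version.cuda)
# Compute capability >= 8.0 means Ampere or newer (guarded for CPU-only setups)
print("ampere+:", torch.cuda.is_available()
      and torch.cuda.get_device_capability()[0] >= 8)
```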