asmith26 opened this issue 9 months ago
The reason you're getting the error is that when you save your model with
model.save_pretrained("mistral-finetuned-gpu")
you're actually saving only the PEFT model (the LoRA adapter), not a complete model. You're doing LoRA fine-tuning, since you call
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,   # Supports any, but = 0 is optimized
    bias="none",      # Supports any, but = "none" is optimized
    use_gradient_checkpointing=True,
    random_state=3407,
    max_seq_length=max_seq_length,
)
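You can double-check this by listing the save directory - with a LoRA/PEFT save you should only see adapter files, not full model weights:

import os

# A PEFT save typically holds adapter_config.json plus adapter_model.safetensors
# (or adapter_model.bin), rather than full model shards.
print(os.listdir("mistral-finetuned-gpu"))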
To actually use your model after training, you'll have to merge your LoRA weights back into the original model you trained on.
I recommend checking this guide on Hugging Face for more information on how to do that.
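Something like the following should work (an untested sketch - I'm assuming a causal-LM fine-tune and using mistralai/Mistral-7B-v0.1 as an unquantized copy of the base, since merging directly into the 4-bit bnb checkpoint isn't straightforward; adjust the class and model names for your setup):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an unquantized copy of the base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
)

# Attach the saved LoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "./mistral-finetuned-gpu")
merged = model.merge_and_unload()

# Save a standalone checkpoint that plain transformers can load later.
merged.save_pretrained("mistral-finetuned-merged")
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1").save_pretrained("mistral-finetuned-merged")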
@asmith26 @mathewpan2 Actually, this looks like it might be a bug - I'll get back to you all! Sorry!
Oh wait @asmith26, could you try upgrading Unsloth?
pip install --upgrade --force-reinstall --no-cache-dir git+https://github.com/unslothai/unsloth.git
The above won't install any new dependencies either.
I ask because I tried it in Colab and it seems fine - I also noticed the error is
File "~/miniconda3/envs/unsloth/lib/python3.11/site-packages/unsloth/models/loader.py", line 68, in from_pretrained
model_config = AutoConfig.from_pretrained(model_name)
which I'm guessing means you're on an older version of Unsloth - that AutoConfig call is now on line 83, not 68!
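If it helps, you can confirm which version actually got installed with something like this (importlib.metadata works whether or not the package exposes a __version__ attribute):

from importlib.metadata import version

# Prints the version of the installed unsloth package.
print(version("unsloth"))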
Thanks very much for your help @danielhanchen. Upgrading, and using the info from @mathewpan2 (also thanks!), I think I've got this to work:
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("unsloth/mistral-7b-bnb-4bit")
model = PeftModel.from_pretrained(model, "./mistral-finetuned-gpu")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")
inputs = tokenizer.encode("This movie was really great!", return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(input_ids=inputs).logits
predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
I also tried using Unsloth directly, but I can't seem to get it to work (not sure if I need to tell Unsloth that this is a SequenceClassification task somehow?):
import torch
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained("./mistral-finetuned-gpu")
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer.encode("This movie was really great!", return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(input_ids=inputs).logits
predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
@asmith26 Oh I did not see this - apologies - I fixed the first bug you described. On the 2nd issue - yeah, sadly we don't provide a function to load an AutoModelForSequenceClassification :( Sorry :(
No problem, thanks for the info - I'm happy to train with Unsloth and infer directly with Hugging Face. So please feel free to close this issue if helpful :)
@asmith26 Oh it's fine - it'll be a feature request :)
Hi, I'm training a model (essentially copied from https://huggingface.co/blog/unsloth-trl#unsloth--trl-integration):
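Roughly, the training setup looks like this (a trimmed sketch along the lines of that blog - the exact dataset and hyperparameters aren't important for the question):

import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048

# Load the 4-bit base model with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=max_seq_length,
    dtype=None,
    load_in_4bit=True,
)

# Attach LoRA adapters.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
    random_state=3407,
    max_seq_length=max_seq_length,
)

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()

# Note: this saves only the LoRA adapter, not a merged model.
model.save_pretrained("mistral-finetuned-gpu")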
How can I now load this model locally? I'm trying:
Unfortunately this yields:
I also tried renaming the files to fix this, but then I got:
Many thanks for any help, and for this amazing lib!