gamercoder153 opened 4 months ago
Our conversational notebooks add EOS tokens to llama-3, for example: https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing
All of the notebooks on our GitHub page (https://github.com/unslothai/unsloth?tab=readme-ov-file#-finetune-for-free) add EOS tokens.
I am facing this error with your Colab notebook during inference: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
I'm facing the same problem here.
@mxtsai Which model?
I've tried Llama 3 and other models. Not sure where the issue is...
Oh wait, Llama-3 base, right? Hmm, where are you all doing inference - Ollama? llama.cpp?
@danielhanchen In Colab, after finetuning.
I was having the same issue and created issue #416. I've posted a solution here.
@KillerShoaib Thanks a lot for the fix, I really appreciate it. Can you explain where I should add that? I'm using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
@KillerShoaib @gamercoder153 @mxtsai Apologies, I just fixed it! No need to change any code - I updated the tokenizer configs, so all should be fine now!
As @danielhanchen mentioned, you don't need to change the code anymore; the bug has been fixed.
Great! Thanks a lot, guys @KillerShoaib @danielhanchen
@danielhanchen @KillerShoaib I checked it once again; it is literally the same.
Okay, I think you're using a finetuned model that was trained on top of the old Unsloth Llama 3 (where the pad token and the EOS token were the same). In that case, you need to change the pad token value.
Here is the code to do that:
################################### Existing Colab Code ###################################
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4/V100; Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-bnb-4bit",
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/llama-2-7b-bnb-4bit",
    "unsloth/gemma-7b-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",  # Instruct version of Gemma 7b
    "unsloth/gemma-2b-bnb-4bit",
    "unsloth/gemma-2b-it-bnb-4bit",  # Instruct version of Gemma 2b
    "unsloth/llama-3-8b-bnb-4bit",   # [NEW] 15 Trillion token Llama-3
]  # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your_finetuned_model_name",  ##### Change the name according to your finetuned model #####
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...",  # use one if using gated models like meta-llama/Llama-2-7b-hf
)

# If your model is already saved as a LoRA adapter, you don't need to call .get_peft_model()
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # Choose any number > 0! Suggested: 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,  # Supports any, but = 0 is optimized
    bias = "none",     # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None,  # And LoftQ
)
################## Additional Code to change pad token value ###################
tokenizer.add_special_tokens({"pad_token": "<|reserved_special_token_0|>"})
model.config.pad_token_id = tokenizer.pad_token_id # updating model config
tokenizer.padding_side = 'right' # padding to right (otherwise SFTTrainer shows warning)
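# (Optional sanity check, not part of the original fix: verify the pad and EOS
# tokens now map to different ids, so generation can stop at EOS instead of looping.)
assert tokenizer.pad_token_id != tokenizer.eos_token_id, "pad and EOS tokens still collide!"
print("pad token:", tokenizer.pad_token, tokenizer.pad_token_id)
print("eos token:", tokenizer.eos_token, tokenizer.eos_token_id)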
################## Rest of the Colab code ###################
...
After changing the pad token value, you need to fine-tune the model again so that it can learn to predict the EOS token. Try a few iterations (e.g. 30-50) and check whether the model is able to generate the EOS token.
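For reference, here is a minimal sketch of what such a short run could look like in the notebook's SFTTrainer arguments. The max_steps value is only illustrative, and the other hyperparameters follow the Unsloth Colab defaults:
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    max_steps = 50,  # a short 30-50 step run is enough to see whether EOS appears
    learning_rate = 2e-4,
    fp16 = True,     # use bf16 = True instead on Ampere+ GPUs
    logging_steps = 1,
    output_dir = "outputs",
)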
This example is only for models that were fine-tuned on top of the old Unsloth Llama 3 (same pad & EOS token). Unsloth has since updated their models; if you're using their current Llama 3 model, you won't have to follow these steps - just follow the original Colab notebook.
@KillerShoaib I am using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing with the llama3-8b-Instruct model: https://huggingface.co/unsloth/llama-3-8b-Instruct
I've just downloaded unsloth/llama-3-8b-Instruct and verified its pad token and EOS token values. They are different, as @danielhanchen mentioned when he said he had solved the issue.
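A minimal way to reproduce that check, assuming only that transformers is installed (this loads just the tokenizer, not the model weights):
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct")
print("eos:", tok.eos_token, tok.eos_token_id)
print("pad:", tok.pad_token, tok.pad_token_id)  # should differ from eos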
I even trained the model on the Alpaca dataset for 60 epochs and got an answer with an EOS token.
Everything is working fine on my end. Are you sure you aren't using an already fine-tuned version of the old Llama 3 (which had the same EOS & pad token) that you saved locally (or to the Hugging Face Hub) and are loading again?
@KillerShoaib No, I am not using any older finetuned model. Let me try it once again.
@KillerShoaib It's the same thing! It just generates <|end_of_text|> sometimes, and otherwise loops like this until the 128 max new tokens are used up.
And if text streaming is set to true, it does the same thing again.
Since you're getting the EOS token sometimes, there is no problem with the Llama 3 model itself. You need to finetune it for more iterations; the model is still learning to predict the EOS token.
Adding pad_token_id solved this issue for me:
outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False, pad_token_id = tokenizer.pad_token_id)
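As a follow-up sketch (not from the original comment), decoding the result with skip_special_tokens drops the pad/eos markers from the printed text:
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])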
@adeel-maker Which notebook are you using?
@gamercoder153 Kaggle or Google Colab.
@adeel-maker Can you share it?
@gamercoder153 The same one provided by Unsloth; I'm just putting my data in it!
@adeel-maker Where in the notebook did you add that section of code?
@gamercoder153 In the inference portion of the notebook, right after the training portion!
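For context, here is a sketch of that inference cell as it appears in the Unsloth Alpaca Colab (alpaca_prompt is the template defined earlier in the notebook, and the prompt text is just the notebook's example), with the pad_token_id addition:
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the fibonacci sequence.",  # instruction
            "1, 1, 2, 3, 5, 8",                  # input
            "",                                  # output - leave blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False,
                         pad_token_id = tokenizer.pad_token_id)
print(tokenizer.batch_decode(outputs))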
@adeel-maker OK, let me try.
If you're using the instruct model, you need to change the EOS token. The tokenizer still has the EOS token as <|end_of_text|> when it should be <|eot_id|>. When you build your Alpaca dataset, change this line:
EOS_TOKEN = tokenizer.eos_token  # Must add EOS_TOKEN
to this:
EOS_TOKEN = "<|eot_id|>"  # Must add EOS_TOKEN
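To show where that line lands, here is a sketch of the dataset-formatting function from the notebook (formatting_prompts_func and alpaca_prompt are the names the Colab uses); only the EOS_TOKEN value changes:
EOS_TOKEN = "<|eot_id|>"  # the instruct chat template ends each turn with <|eot_id|>

def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    inputs       = examples["input"]
    outputs      = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Without appending EOS_TOKEN, generation can run on forever!
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}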
How do I add the EOS token?