Issue: TypeError when initializing task_type using the unsloth pipeline in SFT
I am trying to fine-tune the quantized Mistral 7B model and have set task_type to 'TOKEN_CLS'. However, I am encountering the following error:
**TypeError: dict() got multiple values for keyword argument 'task_type'.**
Here’s the code snippet I’m using:
```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # Choose any number > 0! Suggested: 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,  # Supports any, but = 0 is optimized
    bias = "none",     # Supports any, but = "none" is optimized
    task_type = 'TOKEN_CLS',  # This is the line that triggers the TypeError
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for long contexts
    random_state = 3407,
    loftq_config = None,  # And LoftQ
)
```
**Question:**
Can task_type be set when using the unsloth pipeline for SFT?
Could someone help identify what changes are needed to fix this issue?
Thanks in advance!
Oh wait, I don't think that's supported! If you're trying to predict 1 token as an output, just do normal finetuning, but set the labels to the target you want.
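A minimal sketch of that workaround, for anyone landing here later. The TypeError most likely occurs because get_peft_model already sets task_type internally, so passing it again produces a duplicate keyword argument; dropping it avoids the crash. The dataset contents, hyperparameters, and model name below are hypothetical placeholders, and the exact SFTTrainer signature varies across trl versions (older releases accept dataset_text_field directly, as assumed here):

```python
# Sketch: single-token classification via ordinary causal-LM finetuning,
# assuming the standard unsloth + trl SFT setup.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-bnb-4bit",  # hypothetical choice of checkpoint
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Same call as in the question, but with task_type removed.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    loftq_config = None,
)

# "Set the labels to the target": append the single target token to the
# prompt text, so the model learns to emit it as the next token.
# These examples are made up for illustration.
examples = [
    {"text": "Review: great product\nSentiment: positive"},
    {"text": "Review: arrived broken\nSentiment: negative"},
]
dataset = Dataset.from_list(examples)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```

At inference time you would then generate one token after the prompt (e.g. after "Sentiment:") and read it off as the predicted class.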