subhamiitk opened this issue 2 weeks ago
Hmm, I think this is actually a known issue with reloading runs - it'll get NaNs - tbh I'm not sure why yet. My recent change today might have solved it (though it's unlikely) - it'd be great if you could try it out :)
To update Unsloth:
pip uninstall unsloth -y
pip install --upgrade --force-reinstall --no-cache-dir git+https://github.com/unslothai/unsloth.git
Hi, I performed CPT for my domain following the Continued Pretraining notebook. Now, when I try to perform fine-tuning and apply a different LoRA config, I get the following error:
TypeError: Unsloth: Your model already has LoRA adapters. No need to run this again!
For fine-tuning, I wanted to use a different LoRA config (smaller r and different target_modules than in pretraining), but I am not able to apply it. Currently, I point model_name to the checkpoint obtained after CPT for the fine-tuning step. Below is the code for reference:
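(A minimal sketch of roughly this setup, assuming the standard FastLanguageModel loading pattern; the checkpoint path and LoRA hyperparameters below are placeholders, not the exact original config:)

```python
from unsloth import FastLanguageModel

# Load the checkpoint produced by continued pretraining (CPT).
# "outputs/cpt_checkpoint" is a hypothetical path, not the original one.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "outputs/cpt_checkpoint",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)

# Trying to attach a *new* LoRA config on top of a checkpoint that
# already carries LoRA adapters is what raises the TypeError above.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # smaller rank than the CPT run (illustrative value)
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)
```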
Can someone please suggest what changes are required to perform fine-tuning with a different LoRA config than the one used for CPT?
Also, if I skip applying a new LoRA config and reuse the one applied during pretraining, the loss doesn't seem to converge at all.