Open Nikhil-void opened 1 month ago
As a temporary fix I recommend downgrading your peft version to 0.11.x. They released 0.12 yesterday, and it looks like it's not compatible with the current Unsloth patching.
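The workaround above can be sanity-checked programmatically. A minimal sketch: the `predates_incompatible_release` helper below is hypothetical (not part of peft or Unsloth) and simply tests whether a version string falls below the 0.12 release reported as incompatible:

```python
# Minimal sketch: check whether a peft version predates the 0.12 release
# reported incompatible with Unsloth's patching.
# The helper name is hypothetical, not part of either library.

def predates_incompatible_release(peft_version: str) -> bool:
    """Return True for versions below 0.12 (e.g. the recommended 0.11.x)."""
    major, minor = (int(part) for part in peft_version.split(".")[:2])
    return (major, minor) < (0, 12)


print(predates_incompatible_release("0.11.1"))  # True: 0.11.x is the suggested pin
print(predates_incompatible_release("0.12.0"))  # False: reported broken
```

To apply the actual downgrade, something like `pip install "peft<0.12"` pins the version range.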
I tried this, but it did not work. I am now in dependency hell, and conda can't figure it out.
Fixed - extreme apologies :( Please update Unsloth via the commands below:
pip uninstall unsloth -y
pip install --upgrade --force-reinstall --no-cache-dir git+https://github.com/unslothai/unsloth.git
Colab and Kaggle need a notebook refresh - apologies for the issues.
Hi Unsloth Team,
I encountered some issues while fine-tuning the Gemma model using Unsloth. I was running the example Colab notebook provided by Unsloth without any modifications, and the following messages were displayed:
Steps to Reproduce:
1. Open the example Colab notebook provided by Unsloth.
2. Run the notebook without any modifications.
3. Observe the output messages.

Expected Behavior: The model should patch the relevant layers required for LoRA-based fine-tuning.
Actual Behavior: Unsloth could not patch the MLP, attention, and O-projection layers.
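For context on what "could not patch" can mean in practice: peft selects LoRA target modules by matching target names against module names (suffix matching, a behavior assumed here), so a library change that renames internals can leave targets unmatched. A minimal self-contained sketch of that matching logic — the module names below are illustrative, not taken from the actual Gemma checkpoint:

```python
# Sketch of suffix-based target-module matching, as used when applying
# LoRA adapters: each target name must match the end of some module name.
# Module names below are illustrative, not from the real Gemma model.

TARGETS = ["q_proj", "k_proj", "v_proj", "o_proj",
           "gate_proj", "up_proj", "down_proj"]

def unpatched_targets(module_names, targets=TARGETS):
    """Return the targets that match no module name by suffix."""
    return [t for t in targets
            if not any(name.endswith(t) for name in module_names)]


modules = ["model.layers.0.self_attn.q_proj",
           "model.layers.0.self_attn.k_proj",
           "model.layers.0.self_attn.v_proj"]
# With only q/k/v present, the O-projection and MLP targets go unpatched,
# mirroring the symptom described above.
print(unpatched_targets(modules))
```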
Question: Is this the expected behavior, or is this an issue that needs to be addressed?
I would appreciate any guidance or troubleshooting steps to resolve this. In particular, I am concerned about the impact on the fine-tuning process, since it appears that layers critical to LoRA-based fine-tuning are not being patched.
Thank you for your assistance.