[Closed] miloskovacevic68 closed this issue 1 month ago
@miloskovacevic68 Yes, if the model architecture is supported by Unsloth you can fine-tune it; it doesn't need to come from the unsloth/ repo. You can use the notebook examples presented on the page and just replace the model name.

If you want to train a base model, use an instruct template such as Alpaca. If you train a fine-tuned or instruct-tuned model, use the chat template that the original model was trained with.
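For a base model, the Alpaca-style instruct template mentioned above can be sketched as a plain formatting function (a minimal sketch: the prompt wording follows the commonly used Alpaca format, and the `</s>` EOS token is an assumption — substitute the EOS token from your own tokenizer):

```python
# Sketch of an Alpaca-style instruct template for formatting a training
# example. The prompt wording is the widely used Alpaca format; nothing
# here is Unsloth-specific.

ALPACA_PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{response}"""


def format_alpaca(instruction: str, input_text: str, response: str,
                  eos: str = "</s>") -> str:
    """Render one training example; append EOS so the model learns to stop."""
    return ALPACA_PROMPT.format(
        instruction=instruction,
        input=input_text,
        response=response,
    ) + eos


# Example: one row of an instruct dataset rendered as a training string.
example = format_alpaca("Translate to French.", "Good morning", "Bonjour")
print(example)
```

You would map a function like this over your instruct dataset before passing it to the trainer, replacing the assumed `</s>` with `tokenizer.eos_token`.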
Thank you very much.
Hello, I recently pretrained a Qwen2 1.5B model from scratch using my own tokenizer and a domain dataset. The base model was downloaded from HF ("Qwen/Qwen2-1.5B").
Can I use Unsloth to fine-tune this model on my instruct dataset, or can Unsloth only be used with models from the unsloth/ repo?
If the answer is yes, could you suggest how to do that?
Thank you in advance, Milos