Goekdeniz-Guelmez closed this 2 months ago
Added the `--train-full` flag to enable fine-tuning of the full model weights. Also updated the LORA.md and ACKNOWLEDGMENTS.md files.
Works as intended; tested with tiny-random-Gemma, OpenELM-450M, and OpenELM-270M.
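For reference, a sketch of how the new flag would be invoked through the existing LoRA training entrypoint. The model name and data path here are placeholders, and the exact combination of flags assumes the `--train-full` flag added in this PR sits alongside the standard `mlx_lm.lora` training options:

```shell
# Hypothetical invocation: fine-tune all model weights instead of LoRA adapters.
# <model> and <data-dir> are placeholders, not values from this PR.
python -m mlx_lm.lora \
    --model <model> \
    --train \
    --train-full \
    --data <data-dir> \
    --iters 100
```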
This looks nice and simple, though how do you actually load the fine-tuned model? I don't think everything you need to load it is saved in the `adapter_path`?
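To illustrate the concern: with LoRA, loading needs both the base model and the adapter weights, whereas a full fine-tune would have to save a complete, self-contained checkpoint. A rough sketch of the two loading paths using mlx-lm's `load` API (the paths are placeholders, and whether `--train-full` writes such a standalone checkpoint is exactly the open question here):

```python
from mlx_lm import load

# LoRA case: base model plus adapter weights from adapter_path.
model, tokenizer = load("<base-model>", adapter_path="<adapter-dir>")

# Full fine-tune case: this only works if the training run saved a
# complete checkpoint (weights, config, tokenizer) to one directory,
# which the adapter_path output alone would not provide.
model, tokenizer = load("<full-finetune-dir>")
```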