Closed Kimiko-AI closed 1 year ago
Thanks for the suggestions! Unfortunately that's probably not on the roadmap of this project. This project is mainly aimed at scaling up training: most of the heavy lifting here is designed to let researchers easily scale their training to a large cluster of accelerators, and that machinery is mostly unnecessary in settings where LoRA is a good fit.
LoRA fine-tuning is much faster and uses far less memory than full fine-tuning.
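To illustrate why (a minimal NumPy sketch, not this project's code — the sizes and rank below are example values): LoRA freezes the pretrained weight and trains only two small low-rank factors, so the number of trainable parameters, and hence optimizer/gradient memory, drops by orders of magnitude.

```python
import numpy as np

d, r = 4096, 8  # hidden size and LoRA rank (example values)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))   # frozen pretrained weight
A = rng.standard_normal((d, r))   # trainable low-rank factor
B = np.zeros((r, d))              # trainable factor, zero-init so the
                                  # update A @ B starts as a no-op

def lora_forward(x):
    # y = x W + x A B : base output plus the low-rank update
    return x @ W + (x @ A) @ B

full_params = W.size              # parameters updated in full fine-tuning
lora_params = A.size + B.size     # parameters updated with LoRA
print(full_params, lora_params)
```

With these example numbers, LoRA trains 2 * d * r = 65,536 parameters instead of d * d = 16,777,216 — under 0.4% as many.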