Support for fine-tuning models seems to be very limited. Is LLaMA-2 fine-tuning supported? Will MPT / Falcon fine-tuning be supported?

This repo is mainly designed for fine-tuning LLaMA-2 on various datasets, so yes, LLaMA-2 fine-tuning is supported. There are currently no plans to support MPT / Falcon fine-tuning.

Closed — kb-open closed this issue 11 months ago