Qt4arab opened 8 months ago
See the comment here: #6
I've added some initial pointers to this here: https://github.com/metavoiceio/metavoice-src/issues/70#issuecomment-1957337895
Hey @Qt4arab, we've just published an initial approach for finetuning the last N transformer blocks of the first-stage LLM. It's best to play around with the hyperparams in finetune_params.py, as we didn't determine the optimal set. Let us know if you run into any issues, or if you're up for contributing any improvements (via a param sweep or otherwise)!
The next step for improving finetuning effectiveness is adding LoRA adapters for the first-stage LLM, which is being worked on here.
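For anyone curious what "finetuning the last N transformer blocks" looks like mechanically, here's a minimal PyTorch sketch. This is not metavoice's actual code — the model class, attribute names (`blocks`, `head`), and sizes are all illustrative assumptions; the real hyperparams live in finetune_params.py.

```python
import torch.nn as nn


class TinyLLM(nn.Module):
    """Toy stand-in for a decoder-style LLM (illustrative, not metavoice's model)."""

    def __init__(self, n_blocks: int = 8, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(1000, d_model)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_blocks)
        )
        self.head = nn.Linear(d_model, 1000)


def freeze_all_but_last_n(model: TinyLLM, n: int) -> None:
    """Disable grads everywhere, then re-enable them on the last n
    transformer blocks and the output head, so only those are finetuned."""
    for p in model.parameters():
        p.requires_grad = False
    for block in model.blocks[-n:]:
        for p in block.parameters():
            p.requires_grad = True
    for p in model.head.parameters():
        p.requires_grad = True


model = TinyLLM()
freeze_all_but_last_n(model, n=2)
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

An optimizer built over `filter(lambda p: p.requires_grad, model.parameters())` then only updates the unfrozen blocks, which is what keeps this cheaper than full finetuning.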
I have a high-quality 50k-sample Arabic dataset; is it possible to train the model on Arabic?