Closed gpengzhi closed 9 months ago
Thanks for the interest! haoranxu/ALMA-7B-Pretrain and haoranxu/ALMA-13B-Pretrain are the models fine-tuned only on monolingual data. To clarify, haoranxu/ALMA-7B-Pretrain-LoRA and haoranxu/ALMA-13B-Pretrain-LoRA are just the LoRA adapters trained on top of them for the MT task~
Great work!
Is there any plan to release the model checkpoints fine-tuned only on monolingual data?