Closed: yuezhao238 closed this issue 6 months ago
Yes, we also noticed the overfitting in speech generation. One method not mentioned in the paper is to rewrite the speeches with other LLMs; for example, we used Baichuan2 to augment the speeches in the dataset. However, this introduces the biases of the rewriting LLM and its prompting. You could try more recent models such as Llama 3; I think the results would be much better.
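A rough sketch of what this rewrite-based augmentation could look like, assuming a Hugging Face instruction-tuned model as the rewriter (the checkpoint, prompt wording, and file names below are placeholders, not the exact setup used for the paper):

```python
# Sketch: augment a speech dataset by paraphrasing each speech with an LLM.
# Model name, prompt, and file paths are illustrative assumptions only.
import json
from transformers import pipeline

# Any instruction-tuned chat model can serve as the rewriter
# (Baichuan2 was used for the paper; a Llama 3 checkpoint is assumed here).
rewriter = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed checkpoint
    device_map="auto",
)

PROMPT = (
    "Rewrite the following speech so it keeps the same stance and arguments "
    "but uses different wording:\n\n{speech}\n\nRewritten speech:"
)

def augment(speeches, n_variants=1):
    """Return paraphrased copies of each speech to enlarge the training set."""
    augmented = []
    for speech in speeches:
        for _ in range(n_variants):
            out = rewriter(
                PROMPT.format(speech=speech),
                max_new_tokens=512,
                do_sample=True,      # sampling gives diverse paraphrases
                temperature=0.9,
                return_full_text=False,
            )
            augmented.append(out[0]["generated_text"].strip())
    return augmented

if __name__ == "__main__":
    with open("speeches.json") as f:              # assumed input: list of strings
        originals = json.load(f)
    with open("speeches_augmented.json", "w") as f:
        json.dump(originals + augment(originals), f, ensure_ascii=False, indent=2)
```

Sampling with a moderately high temperature is what gives the paraphrases enough diversity to act as regularization, but it is also where the rewriter's own stylistic bias leaks into the data.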
Thanks for your reply!
Hi, thanks for this great work!
Did you analyse the behavior of the fine-tuned model? I found that GLM shows overfitting behavior after fine-tuning. Will you release your model weights?
Thanks for your precious time!