fe1ixxu / ALMA

State-of-the-art LLM-based translation models.
MIT License

A couple of questions for your theory #29

Closed gyupro closed 8 months ago

gyupro commented 9 months ago

Hello. I'm currently training my model based on the principles you've outlined.

I have a few inquiries I'd like to make.

  1. What's the reason behind selecting the LLaMA-2 model as the foundational model? Is it possible to use a different model, such as Qwen or Mistral, among others?

  2. Regarding CPO data, I have a dataset of several thousand pairs. Is it feasible to train with this dataset, using it directly as CPO data (my plan is to create synthetic data with GPT-4 alongside outputs from my own pre-trained model)?

  3. Have you ever tested a model larger than 13B? I was wondering if I can use 30B+ models as well.

fe1ixxu commented 8 months ago

Thanks for your interest and sorry about the delayed response!

  1. The reason for choosing LLaMA-2 is that it performed the best at zero-shot translation compared with other LLMs at the time I was doing the project. See Section 2 in the paper.
  2. I think it should be feasible; a rough sketch of one way to format such data is shown after this list.
  3. We have not tried the ALMA recipe on a larger model, but it is on its way!
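
As an illustration of point 2, here is a minimal sketch (not part of the ALMA codebase) of one way to turn a few thousand source/translation pairs into chosen/rejected preference pairs for CPO: generate one candidate with GPT-4 and one with your own pre-trained model, score both with a quality estimator, and keep the higher-scored one as "chosen". The field names, file paths, and `score_translation` function below are placeholders you would replace with your own data layout and a real reference-free QE metric (the ALMA-R paper scores candidates with reference-free models for this step).

```python
# Hypothetical sketch: building chosen/rejected preference pairs for CPO
# from two candidate translations per source sentence.
import json

def score_translation(source: str, translation: str) -> float:
    # Placeholder scorer -- replace with a real reference-free quality
    # estimator (e.g. a COMET-style model). Here we just use length so the
    # sketch runs end to end.
    return float(len(translation))

def build_preference_pairs(examples):
    """Each example is assumed to hold a source sentence, a GPT-4
    translation, a translation from your own model, and a target language."""
    pairs = []
    for ex in examples:
        candidates = [ex["gpt4_translation"], ex["own_model_translation"]]
        scored = sorted(candidates,
                        key=lambda t: score_translation(ex["source"], t),
                        reverse=True)
        pairs.append({
            "prompt": f"Translate the following sentence into {ex['target_lang']}:\n{ex['source']}",
            "chosen": scored[0],    # higher-scored candidate
            "rejected": scored[1],  # lower-scored candidate
        })
    return pairs

if __name__ == "__main__":
    with open("my_pairs.json") as f:            # your few-thousand-pair dataset
        examples = json.load(f)
    with open("cpo_preference_data.jsonl", "w") as f:
        for pair in build_preference_pairs(examples):
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```

The resulting JSONL of prompt/chosen/rejected triplets matches the general shape preference-optimization trainers expect, but check the exact column names against whatever CPO training script you end up using.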
gyupro commented 8 months ago

Thanks for your insights. They helped me a lot.