mikeizbicki / modulus-magnus-linguae


Summer interest #20

Open irajmoradi opened 1 year ago

irajmoradi commented 1 year ago

I am interested in working with LLaMA and fine-tuning the LLaMA model to perform better on the Latin questions:

Week 3: Gather training data, learn about fine-tuning, learn about compute options for fine-tuning, and decide which LLaMA model size to start with

Week 4-5: Set up the fine-tuning process and fine-tune the model (a rough sketch of what this could look like is included after this plan)

Week 6: Evaluate how well the fine-tuned model performs against the original model, and potentially fine-tune a model of a different parameter size

Week 7: Compare the new model size against the previous one, then repeat with another model size as time allows

Week 8: Compile the information and data collected, and review the process to identify what could have been done better.
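
For weeks 4-5, here is a minimal sketch of what the fine-tuning setup could look like, assuming a LoRA approach with the Hugging Face `transformers` and `peft` libraries on a copy of the weights converted to the Hugging Face format. The checkpoint directory and the `latin_questions.json` dataset file are placeholders, not existing files:

```python
# Rough LoRA fine-tuning sketch (assumptions: HF-format LLaMA checkpoint,
# and a latin_questions.json file with {"question": ..., "answer": ...} records).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_path = "/path/to/llama-7b-hf"      # placeholder for a converted checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path)
# In practice the base model would likely need quantization or mixed precision
# to fit in GPU memory; omitted here to keep the sketch simple.

# Wrap the base model with small trainable LoRA adapters instead of
# updating all of the base parameters.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Turn each question/answer pair into a single training string.
dataset = load_dataset("json", data_files="latin_questions.json")["train"]

def tokenize(example):
    text = example["question"] + "\n" + example["answer"]
    return tokenizer(text, truncation=True, max_length=512)

dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-latin-lora",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same script could then be rerun with a different checkpoint directory for the other parameter sizes in weeks 6-7.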

mikeizbicki commented 1 year ago

You can find the LLaMA weights on the lambda server at: /home/mizbicki/llama/models
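
As a quick smoke test, loading those weights might look something like the sketch below, assuming they have first been converted to the Hugging Face format (for example with the `convert_llama_weights_to_hf.py` script shipped with `transformers`); the `7B-hf` subdirectory name is only an assumption about the layout:

```python
# Minimal sketch of loading a converted LLaMA checkpoint from the lambda server.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

weights_dir = "/home/mizbicki/llama/models/7B-hf"   # hypothetical converted path
tokenizer = AutoTokenizer.from_pretrained(weights_dir)
model = AutoModelForCausalLM.from_pretrained(
    weights_dir,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
    device_map="auto",           # requires accelerate; places layers on available GPUs
)

prompt = "Quid est nomen tuum?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```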