jzhang38 / TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Apache License 2.0

Assessing performance of TinyLlama #101

Closed galleon closed 7 months ago

galleon commented 7 months ago

I think it would be good to see how the performance of TinyLlama and TinyLlama-chat evolves across checkpoints. We can get this through the HL leaderboard, but it takes quite a long time.

What would you suggest using as a benchmark to compare TinyLlama between versions?

jzhang38 commented 7 months ago

We have included the GPT4All benchmark results in EVAL.md.

galleon commented 7 months ago

Is the code also available in the repo, or is it not specific to TinyLlama?

jzhang38 commented 7 months ago

https://github.com/jzhang38/TinyLlama/blob/965a9de01fa217584bdb89cb05caa0b21729ec80/EVAL.md?plain=1#L30
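For anyone landing here later: checkpoint-by-checkpoint comparisons like this are typically run with EleutherAI's lm-evaluation-harness, which covers the GPT4All task set (HellaSwag, OBQA, WinoGrande, ARC, BoolQ, PIQA). A minimal sketch of such a run follows; the checkpoint name and revision tag below are placeholders, not actual TinyLlama release identifiers, and the exact commands used by the project are in the EVAL.md linked above.

```shell
# Sketch: evaluate one TinyLlama checkpoint on the GPT4All task set
# using lm-evaluation-harness (pip install lm-eval). The model path and
# --model_args revision below are hypothetical placeholders.
lm_eval \
  --model hf \
  --model_args pretrained=TinyLlama/some-intermediate-checkpoint \
  --tasks hellaswag,openbookqa,winogrande,arc_easy,arc_challenge,boolq,piqa \
  --batch_size 8 \
  --output_path results/checkpoint-eval.json
```

Repeating this for each published checkpoint and averaging the task accuracies gives the kind of over-training-progress curve asked about in the opening comment, without waiting on an external leaderboard.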