mlfoundations / scaling

Language models scale reliably with over-training and on downstream tasks

Evaluating Top-1 error with LLMFoundry #7

vishwa27yvs commented 2 months ago

Hi Authors,

Thank you for the amazing work and the detailed exploration of scaling laws for over-training!

I found the idea of using top-1 error (and developing a scaling law for it) quite interesting, and I would like to use this metric for some evaluations of my own. Would it be possible to share the code for evaluating top-1 error? Since you used LLMFoundry, could you also share the metric class if it inherits from InContextLearningMetric, as in https://github.com/mosaicml/llm-foundry/blob/a7b4056a17fb8ce3e484c888c55428b27e92816b/llmfoundry/eval/metrics/__init__.py?
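For reference, here is a minimal sketch of how I currently picture the metric, mainly to confirm I understand top-1 error as 1 minus multiple-choice accuracy. It is only an illustrative placeholder built directly on torchmetrics; the class name and the `choice_scores`/`gold_indices` signature are my own assumptions, not your implementation or LLMFoundry's actual API:

```python
import torch
from torchmetrics import Metric


class Top1ErrorSketch(Metric):
    """Illustrative placeholder, not the authors' metric.

    Counts how often the model's highest-likelihood choice is wrong,
    i.e. top-1 error = 1 - accuracy over multiple-choice examples.
    """

    def __init__(self):
        super().__init__()
        self.add_state("errors", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, choice_scores: torch.Tensor, gold_indices: torch.Tensor) -> None:
        # choice_scores: (num_examples, num_choices) per-choice log-likelihoods
        # gold_indices:  (num_examples,) index of the correct choice
        preds = choice_scores.argmax(dim=-1)
        self.errors += (preds != gold_indices).float().sum()
        self.total += gold_indices.numel()

    def compute(self) -> torch.Tensor:
        # Fraction of examples whose top-scoring choice is not the gold answer.
        return self.errors / self.total
```

If your actual class instead inherits from InContextLearningMetric and operates on batches of logits, that is exactly the part I would love to see.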

Thank you!