mlabonne / llm-autoeval

Automatically evaluate your LLMs in Google Colab
MIT License

How can I add a new benchmark? I'm trying to evaluate a text-to-SQL model! #27

Closed tariksghiouri closed 2 months ago

mlabonne commented 2 months ago

If your benchmark is not already supported by lighteval, you can fork the repo and add it to the runpod.sh file.
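A minimal sketch of what such an addition to runpod.sh might look like. The `text2sql` benchmark name, the function name, and the echoed commands are all hypothetical placeholders, not actual llm-autoeval code; the real invocation would depend on how you register the custom task with lighteval in your fork.

```shell
#!/bin/bash
# Hypothetical sketch: extend the benchmark dispatch in a fork of runpod.sh.
# "text2sql" is a made-up benchmark name; the echoed strings stand in for the
# real evaluation commands, which vary with the installed lighteval version.
run_benchmark() {
  case "$1" in
    text2sql)
      # A custom lighteval task would be launched here, registered through
      # lighteval's custom-task mechanism (see the lighteval docs for the
      # exact CLI of the version in your container).
      echo "running custom text2sql evaluation"
      ;;
    nous|openllm)
      # Benchmarks already handled by runpod.sh stay unchanged.
      echo "running built-in suite: $1"
      ;;
    *)
      echo "unknown benchmark: $1" >&2
      return 1
      ;;
  esac
}
```

The dispatch mirrors how runpod.sh already selects an evaluation suite from an environment variable, so a fork only needs one extra case branch for the new benchmark.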