chigkim / Ollama-MMLU-Pro

Apache License 2.0

Multiple Model Support #4

Open notasquid1938 opened 1 month ago

notasquid1938 commented 1 month ago

This is a great set of scripts! I was wondering if there was a way to modify the config.toml to list multiple models for benchmarking. That way the script doesn't have to be rerun for every new model.

kth8 commented 1 month ago

My solution so far has just been to use a while loop, for example:

```shell
# Scrape the llama3.2 tag list from ollama.com, keeping quantized tags (containing "q")
# and skipping base/text/fp16 and the legacy q4_0/q4_1/q5_0/q5_1 quants
curl -s https://ollama.com/library/llama3.2 | awk -F'["/]' '/700/ && $4 ~ /:/ && $4 ~ /q/ && $4 !~ /base|text|fp16|q4_0|q4_1|q5_0|q5_1/ { print $4 }' > models_list.txt
# Pull and benchmark each model in turn
while read -r line; do ollama pull "$line"; pipenv run python run_openai.py --model "$line"; done < models_list.txt
```