aidatatools / ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)
https://llm.aidatatools.com/
MIT License

feat: Add llama 3 models #7

Closed sammcj closed 2 months ago

sammcj commented 2 months ago

It would be good to replace llama 2 with llama 3 as 2 is a very old model now.

sammcj commented 2 months ago

By the way, I was able to modify the code myself to run against llama 3, I just didn't want to submit a PR without knowing the impact to your existing data.

You can see my results in: https://llm.aidatatools.com/results-macos.php?sort=false 😉

chuangtc commented 2 months ago

Thanks for your suggestion. Please use the latest version, 0.3.18:

pip install llm-benchmark==0.3.18
llm_benchmark run

If it's done, please help close the ticket. Thank you.
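For context on what the benchmark measures: Ollama's `/api/generate` endpoint reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds) in its non-streaming response, from which tokens-per-second throughput can be derived. A minimal sketch of that calculation (this is an illustration, not the llm-benchmark implementation; it assumes a local Ollama server on the default port with the model already pulled):

```python
import json
import urllib.request


def tokens_per_second(resp: dict) -> float:
    # eval_count is the number of generated tokens;
    # eval_duration is the generation time in nanoseconds.
    return resp["eval_count"] / resp["eval_duration"] * 1e9


def benchmark(model: str, prompt: str, host: str = "http://localhost:11434") -> float:
    # Assumes a running Ollama server; `host` and the model name are
    # illustrative and not taken from this thread.
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return tokens_per_second(json.load(resp))
```

For example, a response with `eval_count` of 100 and `eval_duration` of 2,000,000,000 ns works out to 50 tokens/s.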

sammcj commented 2 months ago

Oh wow that was quick!

Thank you.

llm_benchmark run
----------Apple Mac--------
----------
LLM models file path:/Users/samm/.venv/lib/python3.12/site-packages/llm_benchmark/data/benchmark_models_16gb_ram.yml
Checking and pulling the following LLM models
gemma:2b
gemma:7b
mistral:7b
llama3:8b
phi3:3.8b
llava:7b
llava:13b
----------
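The models pulled above come from the packaged YAML file shown in the log (`benchmark_models_16gb_ram.yml`). The exact schema isn't visible in this thread, but a hypothetical sketch of such a models list, with `llama3:8b` in place of the old `llama2` entry, might look like:

```yaml
# Hypothetical sketch of a benchmark models file; the actual schema
# used by llm-benchmark may differ.
models:
  - model: gemma:2b
  - model: gemma:7b
  - model: mistral:7b
  - model: llama3:8b   # replaces the former llama2 entry
  - model: phi3:3.8b
  - model: llava:7b
  - model: llava:13b
```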
chuangtc commented 2 months ago

For security and privacy reasons, to protect developers, I'm hiding the details.