# aidatatools/ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)

https://llm.aidatatools.com/

MIT License · 62 stars · 13 forks
## Issues
| # | Title | Status | Author | Date | Comments |
|---|-------|--------|--------|------|----------|
| #13 | [Feature Request] Pull model through Ollama API instead of invoking ollama binary | Open | yeahdongcn | opened 1 day ago | 1 |
| #12 | Add WSL category for report | Open | nuffin | opened 3 weeks ago | 1 |
| #11 | Adding CPU/GPU distribution to the logs and reports | Open | dan-and | opened 3 weeks ago | 3 |
| #10 | format issue of install_requires argument in setup.py | Closed | nuffin | closed 3 weeks ago | 2 |
| #9 | Support for small systems / SBCs | Closed | dan-and | closed 3 weeks ago | 10 |
| #8 | Running on Non GPU laptops | Open | twelsh37 | opened 1 month ago | 5 |
| #7 | feat: Add llama 3 models | Closed | sammcj | closed 2 months ago | 4 |
| #6 | TypeError: 'NoneType' object is not subscriptable | Closed | bushev | closed 2 months ago | 7 |
| #5 | Add check for required package lsb-core on linux | Closed | felixboettger | closed 2 months ago | 2 |
| #4 | Why the prepared prompts of mistral are different from other models? | Closed | nicholaslck | closed 3 months ago | 1 |
| #3 | flag for ollama executable path | Closed | K1ngjulien | closed 3 months ago | 1 |
| #2 | can we release it as pypi package? | Closed | larrycai | closed 3 months ago | 1 |
| #1 | Need to report tokens/sec | Closed | dbabokin | closed 5 months ago | 1 |