aidatatools / ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)
https://llm.aidatatools.com/
MIT License

TypeError: 'NoneType' object is not subscriptable #6

Closed: bushev closed this issue 5 months ago

bushev commented 5 months ago

Hello, I could not make it work on Linux for some reason.

(.venv) (base) ubuntu@ubuntu-server:~/llm$ llm_benchmark run
-------Linux----------
error!
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/main.py:23 in run              │
│                                                                                                  │
│   20 @app.command()                                                                              │
│   21 def run(ollamabin: str = 'ollama' , sendinfo : bool = True ):                               │
│   22 │   sys_info = sysmain.get_extra()                                                          │
│ ❱ 23 │   print(f"Total memory size : {sys_info['memory']:.2f} GB")                               │
│   24 │   print(f"cpu_info: {sys_info['cpu']}")                                                   │
│   25 │   print(f"gpu_info: {sys_info['gpu']}")                                                   │
│   26 │   print(f"os_version: {sys_info['os_version']}")                                          │
│                                                                                                  │
│ ╭─────── locals ───────╮                                                                         │
│ │ ollamabin = 'ollama' │                                                                         │
│ │  sendinfo = True     │                                                                         │
│ │  sys_info = None     │                                                                         │
│ ╰──────────────────────╯                                                                         │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'NoneType' object is not subscriptable

python --version --> 3.10.12

llm_benchmark 0.3.15
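
[Editor's note] The traceback shows that sysmain.get_extra() returned None (the locals panel lists sys_info = None), so indexing it with sys_info['memory'] raises the TypeError. A minimal sketch of the failing pattern and a defensive guard follows; the guard is purely illustrative and is not the project's actual code:

# Stand-in for a sysmain.get_extra() call whose probing failed.
sys_info = None

# Hypothetical guard: skip the hardware summary instead of crashing.
if sys_info is None:
    print("Could not collect system info; skipping hardware summary.")
else:
    print(f"Total memory size : {sys_info['memory']:.2f} GB")
    print(f"cpu_info: {sys_info['cpu']}")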

chuangtc commented 5 months ago

What version of Ubuntu are you using? Ubuntu 22.04.4 server, 64-bit amd64?

bushev commented 5 months ago

Hey, it's:

(.venv) (base) ubuntu@ubuntu-server:~/llm$ uname -a
Linux ubuntu-server 5.15.0-102-generic #112-Ubuntu SMP Tue Mar 5 16:50:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
(.venv) (base) ubuntu@ubuntu-server:~/llm$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"
chuangtc commented 5 months ago

I noticed you installed Ubuntu 20.04 and then upgraded to 22.04 (the Linux kernel is still the old one). I'm not sure whether grabbing the GPU information failed partway through.

Could you run the following and see if the bug got fixed?

pip install llm-benchmark==0.3.16
llm_benchmark run
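
[Editor's note] Judging from the later output, where gpu and os_version are reported as "unknown" instead of the whole run aborting, the newer release appears to fall back to placeholder values rather than returning None. A rough sketch of that pattern is shown below; the function name and fields are hypothetical, not taken from the package:

import platform

def get_extra_safe():
    # Hypothetical fallback-style collector: never returns None.
    # Fields that cannot be probed keep the "unknown" placeholder,
    # so callers can still format and send the report.
    info = {
        "system": platform.system(),
        "memory": 0.0,
        "cpu": "unknown",
        "gpu": "unknown",
        "os_version": "unknown",
    }
    try:
        import psutil
        info["memory"] = psutil.virtual_memory().total / (1024 ** 3)  # GB
    except Exception:
        pass  # leave the placeholder value in place
    return info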
bushev commented 5 months ago

Yes, that fixed it! For what it's worth, I didn't upgrade from 20.04; I installed 22.04 from the ISO directly about five months ago.

bushev commented 5 months ago

BTW, it still fails, but now at the end of the testing run:

----------------------------------------
model_name =    llava:13b
prompt = Describe the image, /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/data/img/sample1.jpg
eval rate:            8.40 tokens/s
prompt = Describe the image, /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/data/img/sample2.jpg
eval rate:            8.19 tokens/s
prompt = Describe the image, /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/data/img/sample3.jpg
eval rate:            8.19 tokens/s
prompt = Describe the image, /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/data/img/sample4.jpg
eval rate:            8.32 tokens/s
prompt = Describe the image, /home/ubuntu/llm/.venv/lib/python3.10/site-packages/llm_benchmark/data/img/sample5.jpg
eval rate:            8.23 tokens/s
--------------------
Average of eval rate:  8.266  tokens/s
----------------------------------------
Sending the following data to a remote server
-------Linux----------
error! when retrieving cpu, gpu, os_version
Your machine UUID : fb297e04-c96c-56b6-8c25-83488cbe9f93
-------Linux----------
error! when retrieving cpu, gpu, os_version
{
    "mistral:7b": "14.30",
    "gemma:2b": "31.14",
    "gemma:7b": "11.29",
    "llama2:7b": "14.34",
    "llama2:13b": "7.98",
    "llava:7b": "14.22",
    "llava:13b": "8.27",
    "uuid": "fb297e04-c96c-56b6-8c25-83488cbe9f93",
    "ollama_version": "0.1.30"
}
----------
====================
-------Linux----------
error! when retrieving cpu, gpu, os_version
-------Linux----------
error! when retrieving cpu, gpu, os_version
{
    "system": "Linux",
    "memory": 160.98251724243164,
    "cpu": "AMD Ryzen Threadripper PRO 7975WX 32-Cores",
    "gpu": "unknown",
    "os_version": "unknown",
    "system_name": "Linux",
    "uuid": "fb297e04-c96c-56b6-8c25-83488cbe9f93"
}
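
[Editor's note] The "error! when retrieving cpu, gpu, os_version" lines are consistent with GPU probing failing on a box with no discrete GPU and no driver installed, which matches the final comment below. A hedged sketch of how such a probe might fall back to "unknown" (the function is illustrative, not the package's actual implementation; nvidia-smi only covers the NVIDIA case):

import subprocess

def probe_gpu():
    # Hypothetical GPU probe: returns the GPU name, or "unknown" when
    # nvidia-smi is missing or errors out (e.g. no NVIDIA driver/GPU).
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        name = out.stdout.strip()
        return name or "unknown"
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "unknown"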
chuangtc commented 5 months ago

Your data was sent to the server and saved. I'm not sure which GPU you have installed. https://llm.aidatatools.com/results-linux.php

bushev commented 5 months ago

I got it! I have no GPU 🤣