kspviswa / local-packet-whisperer

A Fun project using Ollama, Streamlit & PyShark to chat with PCAP/PCAPNG files locally, privately!
MIT License

ollama keeps crashing #8

Open younity-ENG opened 3 days ago

younity-ENG commented 3 days ago

hi

I've installed local-packet-whisperer on an Ubuntu 22.04 server. The UI comes up, but after I successfully load a pcap file and start chatting in the UI, the app crashes with a connection error: "Connection failed with status 0".

See the error log below. What am I missing here?

File "/home/ubuntu/local-packet-whisperer/env/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script exec(code, module.__dict__) File "/home/ubuntu/local-packet-whisperer/bin/lpw_main.py", line 133, in <module> full_response = chatWithModel(prompt=prompt, model=selected_model) File "/home/ubuntu/local-packet-whisperer/bin/lpw_prompt.py", line 22, in chatWithModel return oClient.chat(prompt=prompt, model=model, temp=0.4) File "/home/ubuntu/local-packet-whisperer/bin/lpw_ollamaClient.py", line 39, in chat response = ollama.chat(model=model, messages=self.messages, options=options) File "/home/ubuntu/local-packet-whisperer/env/lib/python3.10/site-packages/ollama/_client.py", line 162, in chat return self._request_stream( File "/home/ubuntu/local-packet-whisperer/env/lib/python3.10/site-packages/ollama/_client.py", line 82, in _request_stream return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json() File "/home/ubuntu/local-packet-whisperer/env/lib/python3.10/site-packages/ollama/_client.py", line 58, in _request raise ResponseError(e.response.text, e.response.status_code) from None

kspviswa commented 3 days ago

This appears to be an issue with the ollama client being unable to reach ollama. Can you confirm that ollama is running and that the model you are trying to use has already been pulled and is available?
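One way to check both at once is to call the same ollama Python client that LPW uses. This is just a sketch, assuming the default local endpoint:

```python
# Sanity check: is the local ollama server reachable, and which models
# have been pulled? Assumes the default endpoint 127.0.0.1:11434.
from ollama import Client

client = Client(host="http://127.0.0.1:11434")

# list() raises a connection error if the server is unreachable; otherwise
# it returns the locally available models (same data as `ollama list`).
print(client.list())
```

An empty model list or a connection error here narrows the problem down to the ollama side.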

younity-ENG commented 2 days ago

hi

It runs, but with an error; please take a look below. Regarding the models, I pulled them using the `ollama pull` command, the commands completed successfully, and the models are available in the UI.


```
(env) ubuntu:~/local-packet-whisperer$ ollama --version
ollama version is 0.1.48
(env) ubuntu:~/local-packet-whisperer$ sudo systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-07-02 13:11:08 UTC; 17h ago
   Main PID: 383 (ollama)
      Tasks: 11 (limit: 4666)
     Memory: 1.6G
        CPU: 14.344s
     CGroup: /system.slice/ollama.service
             └─383 /usr/local/bin/ollama serve

Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: llm_load_print_meta: LF token = 13 '<0x0A>'
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: llm_load_tensors: ggml ctx size = 0.15 MiB
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 4108181536
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: llama_model_load: error loading model: unable to allocate backend buffer
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: llama_load_model_from_file: exception loading model
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]: terminate called after throwing an instance of 'std::runtime_error'
Jul 02 13:34:18 ip-172-30-0-181 ollama[383]:   what():  unable to allocate backend buffer
Jul 02 13:34:19 ip-172-30-0-181 ollama[383]: time=2024-07-02T13:34:19.108Z level=ERROR source=sched.go:388 msg="error loading llama server" error="llama ru>
Jul 02 13:34:19 ip-172-30-0-181 ollama[383]: [GIN] 2024/07/02 - 13:34:19 | 500 | 985.840682ms | 127.0.0.1 | POST "/api/chat"
Jul 03 06:36:35 ip-172-30-0-181 ollama[383]: [GIN] 2024/07/03 - 06:36:35 | 200 | 42.819µs | 127.0.0.1 | GET "/api/version"
```
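The `failed to allocate buffer of size 4108181536` and `unable to allocate backend buffer` lines indicate that ollama could not obtain roughly 3.8 GiB of RAM to load the model weights, so the crash looks like the host running out of memory rather than a problem in LPW. A rough sketch of the comparison, with the byte count copied from the log above and free memory read from the standard Linux `/proc/meminfo`:

```python
# Rough check: does the host have enough free memory for the buffer that
# ollama failed to allocate? The byte count is copied from the log above.
NEEDED_BYTES = 4_108_181_536

with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)

available_kib = int(meminfo["MemAvailable"].split()[0])  # value is in kB
available_bytes = available_kib * 1024

print(f"model buffer needs ~{NEEDED_BYTES / 2**30:.1f} GiB, "
      f"MemAvailable is ~{available_bytes / 2**30:.1f} GiB")
if available_bytes < NEEDED_BYTES:
    print("Not enough free RAM to load this model; a smaller or more "
          "heavily quantized model (or more RAM/swap) would be needed.")
```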