Jeffser / Alpaca

🦙 An Ollama client made with GTK4 and Adwaita
https://jeffser.com/alpaca
GNU General Public License v3.0

Alpaca crashes when asking Llama 3.1 a question #327

Closed by polluxau 2 months ago

polluxau commented 2 months ago

Describe the bug

Asking Llama 3.1 a question like "What is Linux?" makes it start giving the answer, then either crash instantly or grey-screen and crash a couple of minutes later.

Expected behavior

Ask it the question and it shouldn't crash.

Screenshots

https://github.com/user-attachments/assets/5e3a8d05-abf2-436a-9124-46fe8643696f

Debugging information

Gtk:ERROR:../gtk/gtkwidget.c:3902:gtk_widget_ensure_allocate_on_children: assertion failed: (!priv->resize_needed)
Bail out! Gtk:ERROR:../gtk/gtkwidget.c:3902:gtk_widget_ensure_allocate_on_children: assertion failed: (!priv->resize_needed)
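One common cause of this kind of `resize_needed` assertion (an assumption on my part, not a confirmed diagnosis of Alpaca's crash) is mutating the widget tree from outside the GTK main loop while tokens stream in. The usual fix is to have worker threads enqueue updates and let the main loop apply them; in real GTK4 code the drain step would be scheduled with `GLib.idle_add`. Below is a minimal, GTK-free sketch of that pattern — `stream_tokens` and `drain` are hypothetical names, not Alpaca's actual code:

```python
import queue
import threading

# Worker threads never touch widgets directly; they enqueue text chunks,
# and the main-loop side (simulated here by drain()) applies them.
updates: "queue.Queue[str]" = queue.Queue()

def stream_tokens(tokens):
    # Worker thread: push each generated token onto the queue.
    for tok in tokens:
        updates.put(tok)

def drain():
    # Main-loop side: pull everything queued so far into the "label" text.
    text = []
    while True:
        try:
            text.append(updates.get_nowait())
        except queue.Empty:
            break
    return "".join(text)

worker = threading.Thread(target=stream_tokens,
                          args=(["Linux ", "is ", "a kernel."],))
worker.start()
worker.join()
print(drain())  # -> Linux is a kernel.
```

In a GTK4 app the `print(drain())` step would instead be `GLib.idle_add(lambda: label.set_text(drain()))`, which guarantees the widget is only touched on the main thread.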


INFO [main.py | main] Alpaca version: 2.0.4
INFO [connection_handler.py | start] Starting Alpaca's Ollama instance...
INFO [connection_handler.py | start] Started Alpaca's Ollama instance
Error: listen tcp 127.0.0.1:11435: bind: address already in use
INFO [connection_handler.py | start] client version is 0.3.11
INFO [connection_handler.py | request] GET : http://127.0.0.1:11435/api/tags
ERROR [model_widget.py | update_local_list] ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
ERROR [window.py | connection_error] Connection error
INFO [connection_handler.py | reset] Resetting Alpaca's Ollama instance
INFO [connection_handler.py | stop] Stopping Alpaca's Ollama instance
INFO [connection_handler.py | stop] Stopped Alpaca's Ollama instance
INFO [connection_handler.py | start] Starting Alpaca's Ollama instance...
INFO [connection_handler.py | start] Started Alpaca's Ollama instance
2024/09/25 07:32:12 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/bazzite/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost: https://localhost: http://127.0.0.1 https://127.0.0.1 http://127.0.0.1: https://127.0.0.1: http://0.0.0.0 https://0.0.0.0 http://0.0.0.0: https://0.0.0.0: app:// file:// tauri://] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-25T07:32:12.541+10:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-09-25T07:32:12.541+10:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-25T07:32:12.541+10:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11435 (version 0.3.11)"
time=2024-09-25T07:32:12.542+10:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-09-25T07:32:12.542+10:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
INFO [connection_handler.py | start] client version is 0.3.11
INFO [window.py | show_toast] There was an error with the local Ollama instance, so it has been reset
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/cpu/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/cpu_avx/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/cpu_avx2/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/cuda_v11/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/cuda_v12/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/bazzite/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3099812518/runners/rocm_v60102/ollama_llama_server
time=2024-09-25T07:32:18.294+10:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-09-25T07:32:18.294+10:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-09-25T07:32:18.294+10:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-25T07:32:18.295+10:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-09-25T07:32:18.295+10:00 level=DEBUG source=gpu.go:467 msg="Searching for GPU library" name=libcuda.so
time=2024-09-25T07:32:18.295+10:00 level=DEBUG source=gpu.go:490 msg="gpu library search" globs="[/app/lib/ollama/libcuda.so /app/lib/libcuda.so /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so /usr/lib/sdk/llvm15/lib/libcuda.so /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so /usr/lib/ollama/libcuda.so /app/plugins/AMD/lib/ollama/libcuda.so /usr/local/cuda/targets//lib/libcuda.so /usr/lib/-linux-gnu/nvidia/current/libcuda.so /usr/lib/-linux-gnu/libcuda.so /usr/lib/wsl/lib/libcuda.so /usr/lib/wsl/drivers//libcuda.so /opt/cuda/lib/libcuda.so /usr/local/cuda/lib/libcuda.so /usr/lib/libcuda.so /usr/local/lib/libcuda.so]"
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:524 msg="discovered GPU libraries" paths=[]
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:467 msg="Searching for GPU library" name=libcudart.so
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:490 msg="gpu library search" globs="[/app/lib/ollama/libcudart.so /app/lib/libcudart.so /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so /usr/lib/sdk/llvm15/lib/libcudart.so /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so /usr/lib/ollama/libcudart.so /app/plugins/AMD/lib/ollama/libcudart.so /app/lib/ollama/libcudart.so /usr/local/cuda/lib64/libcudart.so /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so /usr/lib/x86_64-linux-gnu/libcudart.so /usr/lib/wsl/lib/libcudart.so /usr/lib/wsl/drivers//libcudart.so /opt/cuda/lib64/libcudart.so /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so /usr/lib/aarch64-linux-gnu/libcudart.so /usr/local/cuda/lib/libcudart.so /usr/lib/libcudart.so /usr/local/lib/libcudart.so*]"
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:524 msg="discovered GPU libraries" paths="[/app/lib/ollama/libcudart.so.12.4.99 /app/lib/ollama/libcudart.so.11.3.109]"
cudaSetDevice err: 35
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:536 msg="Unable to load cudart" library=/app/lib/ollama/libcudart.so.12.4.99 error="your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=gpu.go:536 msg="Unable to load cudart" library=/app/lib/ollama/libcudart.so.11.3.109 error="your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
time=2024-09-25T07:32:18.296+10:00 level=WARN source=amd_linux.go:60 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-09-25T07:32:18.296+10:00 level=DEBUG source=amd_linux.go:103 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:128 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:103 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:218 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=29663 unique_id=0
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:252 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:284 msg="amdgpu memory" gpu=0 total="10.0 GiB"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_linux.go:285 msg="amdgpu memory" gpu=0 available="9.0 GiB"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /app/lib/ollama"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /app/lib"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/x86_64-linux-gnu/GL/default/lib"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/x86_64-linux-gnu/openh264/extra"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/x86_64-linux-gnu/openh264/extra"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/sdk/llvm15/lib"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/x86_64-linux-gnu/GL/default/lib"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib/ollama"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /app/plugins/AMD/lib/ollama"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /opt/rocm/lib"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/lib64"
time=2024-09-25T07:32:18.297+10:00 level=DEBUG source=amd_common.go:18 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
time=2024-09-25T07:32:18.297+10:00 level=WARN source=amd_linux.go:400 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-09-25T07:32:18.297+10:00 level=WARN source=amd_linux.go:323 msg="unable to verify rocm library, will use cpu" error="no suitable rocm found, falling back to CPU"
time=2024-09-25T07:32:18.297+10:00 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-09-25T07:32:18.297+10:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="30.8 GiB" available="20.4 GiB"
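The `Error: listen tcp 127.0.0.1:11435: bind: address already in use` line near the top of the log means another process was already bound to the port Alpaca's bundled Ollama uses when the instance restarted. A quick way to check for that condition is to probe the port first; this is a generic sketch (not Alpaca's actual code), with the host and port taken from the log above:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    # Try to connect; success (connect_ex returns 0) means something
    # is already listening on that host/port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Per the log, Alpaca's bundled Ollama listens on 127.0.0.1:11435;
# checking first avoids the "bind: address already in use" failure path.
if port_in_use("127.0.0.1", 11435):
    print("port 11435 busy: another Ollama instance is probably running")
else:
    print("port 11435 free: safe to start the bundled instance")
```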

polluxau commented 2 months ago

No longer occurs in 2.0.5, so closing :)

Jeffser commented 2 months ago

I'm glad that fixed it because I had no idea what was causing this error hahaha